Addressing rank degeneracy in constraint-reduced interior-point methods for linear optimization

In earlier work (Tits et al., SIAM J. Optim., 17(1):119--146, 2006; Winternitz et al., COAP, DOI: 10.1007/s10589-010-9389-4, 2011), the present authors and their collaborators proposed primal-dual interior-point (PDIP) algorithms for linear optimization that, at each iteration, use only a subset of the (dual) inequality constraints in constructing the search direction. For problems with many more constraints than (dual) variables, this can yield a major speedup in the computation of search directions. However, in order for the Newton-like PDIP steps to be well defined, it is necessary that the gradients of the constraints included in the working set span the full dual space. In practice, in particular in the case of highly sparse problems, this often results in an undesirably large working set---or in an expensive trial-and-error process for its selection. In this paper we present two approaches that remove this non-degeneracy requirement, while retaining the convergence results obtained in the earlier work.
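
The following minimal NumPy sketch illustrates the issue described above; it is not the paper's proposed algorithms. It builds the reduced normal-equations matrix used by constraint-reduced PDIP methods from only a working set Q of constraint gradients, shows that the matrix becomes singular when those gradients fail to span the dual space, and applies a generic Tikhonov regularization purely to demonstrate why removing the rank requirement matters. All names (m, n, A, x, s, Q, rho) are hypothetical placeholders, not notation from the paper.

```python
# Illustrative sketch only; the regularization below is a generic remedy,
# not the two approaches proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 200                      # few dual variables, many inequality constraints
A = rng.standard_normal((m, n))    # constraint gradients are the columns of A
x = rng.uniform(0.5, 1.5, n)       # primal iterate  (x > 0)
s = rng.uniform(0.5, 1.5, n)       # dual slacks     (s > 0)
rhs = rng.standard_normal(m)       # placeholder right-hand side of the Newton system

d2 = x / s                         # diagonal scaling D^2 = X S^{-1}

# Full normal-equations matrix M = A D^2 A^T: cost grows with n.
M_full = (A * d2) @ A.T

# Constraint-reduced matrix: only the columns in the working set Q are used.
Q = np.arange(3)                   # deliberately too small: rank(A[:, Q]) < m
M_red = (A[:, Q] * d2[Q]) @ A[:, Q].T

print("rank of full matrix   :", np.linalg.matrix_rank(M_full))  # = m, step well defined
print("rank of reduced matrix:", np.linalg.matrix_rank(M_red))   # < m, Newton step breaks down

# Generic illustration of a fix: a small proximal/Tikhonov term keeps the
# reduced system solvable for any working set Q, however rank-deficient.
rho = 1e-8
dy = np.linalg.solve(M_red + rho * np.eye(m), rhs)
print("regularized step norm :", np.linalg.norm(dy))
```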

Citation

Technical Report, Institute for Systems Research, University of Maryland, College Park; revised November 2012.
