Optimization Online





 

A Stable Iterative Method for Linear Programming

Maria Gonzalez-Lima (mgl***at***cesma.usb.ve)
Hua Wei (h3wei***at***math.uwaterloo.ca)
Henry Wolkowicz (hwolkowicz***at***uwaterloo.ca)

Abstract: This paper studies a new primal-dual interior/exterior-point method for linear programming. We begin with the usual perturbed primal-dual optimality equations $F_\mu(x,y,z)=0$. Under nondegeneracy assumptions, this nonlinear system is well-posed, i.e., it has a nonsingular Jacobian at optimality, so it is not necessarily ill-conditioned as the iterates approach optimality. We use a simple preprocessing step to eliminate both the primal and dual feasibility equations, which yields a single bilinear equation that retains the well-posedness property. We then apply a preconditioned conjugate gradient (PCG) method, within an inexact Newton framework, directly to the linearized equations. This is done without forming the usual {\em normal equations (NEQ)} or {\em augmented} system, and sparsity is maintained. The work of an iteration consists almost entirely of the (approximate) solution of this well-posed linearized system by PCG; therefore, improvements depend on efficient preconditioning. Exact primal and dual feasibility are guaranteed throughout the iterations when starting from a feasible point. Since the linearization is well-posed, standard preconditioners can be applied. Moreover, once we are close enough to the optimum, we can use affine scaling without maintaining positivity, i.e., we apply a {\em crossover} technique. This allows for {\em warm starts} with perturbed data. In addition, we correctly identify some of the primal and dual variables that converge to 0 and delete them (a purify step), so the systems shrink as the iterations progress. These techniques reduce the number and complexity of the iterations. We present numerical examples where roundoff error causes problems for NEQ, and we report numerical tests with direct solvers as well as iterative solvers with both {\em diagonal} and {\em Cholesky-type} preconditioners. Our tests show that our method takes direct advantage of sparsity and stability of the data. We include warm-start tests for perturbed problems.
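For reference, in the standard LP setting $\min\, c^{\mathsf T}x$ subject to $Ax=b$, $x\ge 0$, the perturbed primal-dual optimality equations mentioned in the abstract typically take the following form (standard textbook notation, not necessarily the paper's exact formulation):

```latex
% Perturbed primal-dual optimality conditions with dual variables (y, z)
% and barrier parameter \mu > 0:
F_\mu(x,y,z) \;=\;
\begin{pmatrix}
  A^{\mathsf T} y + z - c \\[2pt]
  A x - b \\[2pt]
  X Z e - \mu e
\end{pmatrix}
\;=\; 0,
\qquad X = \operatorname{Diag}(x), \quad Z = \operatorname{Diag}(z),
```

where the first two blocks are the dual and primal feasibility equations that the preprocessing step eliminates, leaving a bilinear (complementarity-type) equation in the remaining variables.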

Keywords: Linear Programming, large sparse problems, preconditioned conjugate gradients
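As a generic illustration of the preconditioned conjugate gradient technique named above (not the authors' code), the sketch below applies SciPy's CG with a simple diagonal (Jacobi) preconditioner to a sparse symmetric positive definite system, a stand-in for the well-posed linearized system that the abstract says standard preconditioners can handle:

```python
# Illustrative sketch only: diagonal-preconditioned CG on a sparse SPD
# test matrix standing in for a well-posed linearized system.
# The matrix construction here is an assumption for demonstration.
import numpy as np
from scipy.sparse import diags, random as sprandom
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(0)
n = 200

# Build a sparse SPD matrix: B B^T (positive semidefinite) plus a
# diagonal shift to make it definite and well-conditioned.
B = sprandom(n, n, density=0.02, random_state=0)
A = (B @ B.T).tocsr() + diags(np.full(n, 10.0))
b = rng.standard_normal(n)

# Jacobi (diagonal) preconditioner: apply the inverse of diag(A).
d_inv = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: d_inv * v)

x, info = cg(A, b, M=M)  # info == 0 indicates convergence
residual = np.linalg.norm(A @ x - b)
```

The paper's contribution lies in the system being preconditioned (a well-posed linearization rather than the ill-conditioned normal equations), not in the PCG routine itself, which is standard.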

Category 1: Linear, Cone and Semidefinite Programming (Linear Programming)

Citation: CORR 2004-26, Department of Combinatorics and Optimization, University of Waterloo

Download: [Postscript][PDF]

Entry Submitted: 10/12/2004
Entry Accepted: 10/12/2004
Entry Last Modified: 12/13/2007
