A predictor-corrector path-following algorithm for dual-degenerate parametric optimization problems

Most path-following algorithms for tracing a solution path of a parametric nonlinear optimization problem are certifiably convergent only under strong regularity assumptions on the problem functions, in particular the linear independence of the constraint gradients at the solutions, which implies a unique multiplier solution for every nonlinear program. In this paper we propose, and prove convergence results for, a procedure designed to solve problems satisfying a weaker set of conditions, allowing for non-unique (but bounded) multipliers, by applying known sensitivity results for this class of problems. Each iteration along the path consists of three parts: (1) a Newton corrector step for the primal and dual variables, obtained by solving a linear system of equations; (2) a tangential predictor step for the primal and dual variables, found as the solution of a quadratic programming problem; and (3) a jump step for the dual variables, found as the solution of a linear programming problem. We present a convergence proof and demonstrate numerically that the algorithm successfully tracks solutions on a couple of illustrative examples.
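To make the three-part iteration structure concrete, the following is a minimal Python sketch of one step of a generic predictor-corrector scheme of this shape; it is not the paper's algorithm. The callables kkt_residual, kkt_jacobian, qp_predictor, and lp_jump are hypothetical problem-specific oracles introduced here only for illustration.

```python
import numpy as np

def path_following_step(x, lam, t, dt,
                        kkt_residual, kkt_jacobian,
                        qp_predictor, lp_jump):
    """One schematic iteration of a predictor-corrector path-following scheme.

    x, lam : current primal and dual estimates at parameter value t
    dt     : step taken along the parameter
    The four callables are hypothetical oracles supplied by the user;
    they do not reflect the paper's actual formulation.
    """
    n = x.size

    # (1) Newton corrector: one linear solve on the KKT residual at t,
    #     updating both primal and dual variables.
    step = np.linalg.solve(kkt_jacobian(x, lam, t), -kkt_residual(x, lam, t))
    x, lam = x + step[:n], lam + step[n:]

    # (2) Tangential predictor: primal-dual direction toward t + dt,
    #     obtained from a quadratic programming subproblem (here an oracle).
    dx, dlam = qp_predictor(x, lam, t, dt)

    # (3) Jump step: re-select the dual estimate within the (possibly
    #     non-unique) multiplier set via a linear programming subproblem.
    lam_new = lp_jump(x + dx, lam + dlam, t + dt)

    return x + dx, lam_new, t + dt
```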

Citation

Technical Report, Norwegian University of Science and Technology, March 2016
