Optimization Online


Regularized Sequential Quadratic Programming

Philip Gill (pgill***at***ucsd.edu)
Daniel Robinson (daniel.p.robinson***at***jhu.edu)

Abstract: We present the formulation and analysis of a new sequential quadratic programming (SQP) method for general nonlinearly constrained optimization. The method pairs a primal-dual generalized augmented Lagrangian merit function with a "flexible" line search to obtain a sequence of improving estimates of the solution. This function is a primal-dual variant of the augmented Lagrangian proposed by Hestenes and Powell in the early 1970s. A crucial feature of the method is that the QP subproblems are convex, yet formed from the exact second derivatives of the original problem. This is in contrast to methods that use a less accurate quasi-Newton approximation. Additional benefits of this approach include the following: (i) each QP subproblem is regularized; (ii) the QP subproblem always has a known feasible point; and (iii) a projected gradient method may be used to identify the QP active set when far from the solution.
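For context, the classical augmented Lagrangian of Hestenes and Powell for the equality-constrained problem minimize f(x) subject to c(x) = 0 can be written as below. The primal-dual form that follows is an illustrative sketch only: it shows the general shape of a merit function that also penalizes deviation of the dual variables y from a multiplier estimate y_e, but the precise merit function used in the paper (including any scaling of the dual term) is given there.

```latex
% Classical (Hestenes--Powell) augmented Lagrangian, with multiplier
% estimate y_e and penalty parameter \mu > 0:
\[
  L_A(x;\, y_e, \mu) \;=\; f(x) \;-\; c(x)^T y_e
  \;+\; \frac{1}{2\mu}\, \| c(x) \|_2^2 .
\]
% Illustrative primal-dual variant (NOT necessarily the exact form in
% the paper): an additional term penalizes the distance of the dual
% variables y from the estimate y_e, shifted by the constraint value:
\[
  M(x, y;\, y_e, \mu) \;=\; f(x) \;-\; c(x)^T y_e
  \;+\; \frac{1}{2\mu}\, \| c(x) \|_2^2
  \;+\; \frac{1}{2\mu}\, \| c(x) + \mu (y - y_e) \|_2^2 .
\]
```

At a stationary point of the classical function in x alone, the multipliers enter only through the fixed estimate y_e; the primal-dual form is minimized jointly in (x, y), which is what allows a line search on both primal and dual variables.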

Keywords: Nonlinear programming, nonlinear constraints, augmented Lagrangian, sequential quadratic programming, SQP methods, regularized methods, primal-dual methods

Category 1: Nonlinear Optimization (Constrained Nonlinear Optimization)

Citation: UCSD Technical Report NA-11-02


Entry Submitted: 10/30/2011
Entry Accepted: 10/30/2011
Entry Last Modified: 10/31/2011

