Optimization Online

A second derivative SQP method: local convergence

Nicholas I. M. Gould(nick.gould***at***stfc.ac.uk)
Daniel P. Robinson(daniel.robinson***at***comlab.ox.ac.uk)

Abstract: Gould and Robinson (NAR 08/18, Oxford University Computing Laboratory, 2008) gave global convergence results for a second-derivative SQP method for minimizing the exact $\ell_1$-merit function for a \emph{fixed} value of the penalty parameter. To establish this result, we used the properties of the so-called Cauchy step, which was itself computed from the so-called predictor step. In addition, we allowed for the computation of a variety of (optional) SQP steps that were intended to improve the efficiency of the algorithm. Although we established global convergence of the algorithm, we did not discuss certain aspects that are critical when developing software capable of solving general optimization problems. In particular, we must have strategies for updating the penalty parameter and better techniques for defining the positive-definite matrix $B_k$ used in computing the predictor step. In this paper we address both of these issues. We consider two techniques for defining the positive-definite matrix $B_k$---a simple diagonal approximation and a more sophisticated \emph{limited}-memory BFGS update. We also analyze a strategy for updating the penalty parameter based on \emph{approximately} minimizing the $\ell_1$-penalty function over a sequence of increasing values of the penalty parameter. Algorithms based on exact penalty functions have certain desirable properties. To be practical, however, these algorithms must be guaranteed to avoid the so-called Maratos effect. We show that a \emph{nonmonotone} variant of our algorithm avoids this phenomenon and, therefore, results in asymptotically superlinear local convergence; this is verified by preliminary numerical results on the Hock and Schittkowski test set.
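As a point of reference for the abstract above, the exact $\ell_1$-penalty (merit) function being minimized typically has the form $\phi(x;\nu) = f(x) + \nu\,\|c_E(x)\|_1 + \nu\,\|\max(0, -c_I(x))\|_1$ for equality constraints $c_E(x)=0$ and inequalities $c_I(x)\ge 0$. The following is a minimal sketch of evaluating such a merit function, not the authors' implementation; the function names and the toy problem are illustrative assumptions.

```python
def l1_penalty(f, c_eq, c_ineq, x, nu):
    """Exact l1-merit function phi(x; nu) for a fixed penalty parameter nu.

    f       : objective, f(x) -> float
    c_eq    : equality constraints, c_eq(x) -> list (each should equal 0)
    c_ineq  : inequality constraints, c_ineq(x) -> list (each should be >= 0)
    nu      : positive penalty parameter (illustrative name)
    """
    # l1 norm of the equality-constraint residuals
    violation = sum(abs(ci) for ci in c_eq(x))
    # l1 norm of the inequality-constraint violations
    violation += sum(max(0.0, -ci) for ci in c_ineq(x))
    return f(x) + nu * violation


# Toy example (not from the paper): minimize x1^2 + x2^2 s.t. x1 + x2 = 1.
f = lambda x: x[0] ** 2 + x[1] ** 2
c_eq = lambda x: [x[0] + x[1] - 1.0]
c_ineq = lambda x: []

phi_infeasible = l1_penalty(f, c_eq, c_ineq, [0.0, 0.0], 10.0)  # 0 + 10*1 = 10.0
phi_feasible = l1_penalty(f, c_eq, c_ineq, [0.5, 0.5], 10.0)    # 0.5 + 0 = 0.5
```

Because the $\ell_1$ penalty is exact, for sufficiently large $\nu$ a minimizer of $\phi(\cdot;\nu)$ solves the constrained problem; the strategy analyzed in the paper adjusts $\nu$ over an increasing sequence rather than fixing it in advance.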

Keywords: nonlinear programming, sequential quadratic programming, nonsmooth optimization, $\ell_1$-penalty function

Category 1: Nonlinear Optimization (Constrained Nonlinear Optimization)

Citation: Numerical Analysis Report 08/21, Oxford University Computing Laboratory.

Download: [PDF]

Entry Submitted: 12/30/2008
Entry Accepted: 01/08/2009
Entry Last Modified: 12/30/2008
