Optimization Online





 

An Adaptive Augmented Lagrangian Method for Large-Scale Constrained Optimization

Frank E. Curtis (frank.e.curtis***at***gmail.com)
Hao Jiang (hjiang13***at***jhu.edu)
Daniel P. Robinson (daniel.p.robinson***at***jhu.edu)

Abstract: We propose an augmented Lagrangian algorithm for solving large-scale constrained optimization problems. The novel feature of the algorithm is an adaptive update for the penalty parameter motivated by recently proposed techniques for exact penalty methods. This adaptive updating scheme greatly improves the overall performance of the algorithm without sacrificing the strengths of the core augmented Lagrangian framework, such as its ability to be implemented matrix-free. This is important, as this feature is responsible for the renewed interest in employing augmented Lagrangian methods for solving large-scale problems. We provide convergence results from remote starting points and illustrate through a set of numerical experiments that our method outperforms traditional augmented Lagrangian methods in terms of critical performance measures.
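To make the abstract concrete, the following is a minimal sketch of a classical first-order augmented Lagrangian loop on a two-variable toy problem, with a simple heuristic penalty update (increase the penalty parameter only when infeasibility stagnates) standing in for the paper's adaptive steering rule. This is an illustration only, not the authors' algorithm: the test problem, the 0.25 stagnation threshold, and the factor-of-10 penalty increase are all invented for the example.

```python
import numpy as np

# Toy equality-constrained problem (illustrative only):
#   minimize  f(x) = x0^2 + x1^2   subject to  c(x) = x0 + x1 - 1 = 0,
# whose solution is x* = (0.5, 0.5) with multiplier lambda* = -1.
a = np.array([1.0, 1.0])                 # gradient of the linear constraint
c = lambda x: a @ x - 1.0                # constraint violation

def solve_subproblem(lam, rho):
    # Minimize the augmented Lagrangian in x for fixed (lam, rho):
    #   L_A(x; lam, rho) = f(x) + lam * c(x) + (rho/2) * c(x)^2.
    # For this quadratic toy problem, stationarity
    #   2x + (lam + rho * c(x)) * a = 0
    # is a linear system, solved here exactly.
    H = 2.0 * np.eye(2) + rho * np.outer(a, a)
    g = (rho - lam) * a
    return np.linalg.solve(H, g)

def solve(lam=0.0, rho=1.0, tol=1e-8, max_outer=50):
    x = np.zeros(2)
    for _ in range(max_outer):
        viol_before = abs(c(x))
        x = solve_subproblem(lam, rho)   # approximate AL minimization
        viol = abs(c(x))
        if viol < tol:
            break
        lam += rho * c(x)                # first-order multiplier update
        # Heuristic adaptive penalty update: increase rho only when the
        # constraint violation has not decreased by a fixed fraction.
        if viol > 0.25 * viol_before:
            rho *= 10.0
    return x, lam, rho

x, lam, rho = solve()
print(x, lam)                            # x near [0.5, 0.5], lam near -1
```

The point of the adaptive flavor is that the penalty parameter reacts to observed progress toward feasibility rather than following a fixed schedule; the paper's actual rule steers the parameter inside the subproblem solves, which a sketch at this level does not capture.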

Keywords: nonlinear optimization, nonconvex optimization, large-scale optimization, augmented Lagrangians, matrix-free methods, steering methods

Category 1: Nonlinear Optimization

Citation: F. E. Curtis, H. Jiang, and D. P. Robinson. An Adaptive Augmented Lagrangian Method for Large-Scale Constrained Optimization. Mathematical Programming, 152(1-2):201-245, 2015.

Entry Submitted: 12/17/2012
Entry Accepted: 12/17/2012
Entry Last Modified: 01/02/2017
