An Adaptive Augmented Lagrangian Method for Large-Scale Constrained Optimization
Frank E. Curtis (frank.e.curtis@gmail.com)
Abstract: We propose an augmented Lagrangian algorithm for solving large-scale constrained optimization problems. The novel feature of the algorithm is an adaptive update for the penalty parameter motivated by recently proposed techniques for exact penalty methods. This adaptive updating scheme greatly improves the overall performance of the algorithm without sacrificing the strengths of the core augmented Lagrangian framework, such as its ability to be implemented matrix-free. This is important, as this feature of augmented Lagrangian methods is responsible for the renewed interest in employing such methods for solving large-scale problems. We provide convergence results from remote starting points and illustrate through a set of numerical experiments that our method outperforms traditional augmented Lagrangian methods in terms of critical performance measures.
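To make the basic framework concrete, the following is a minimal sketch of a classical augmented Lagrangian loop with a simple adaptive penalty heuristic. It is not the paper's algorithm: the penalty update rule, step sizes, and the toy equality-constrained problem (minimize x1^2 + x2^2 subject to x1 + x2 = 1) are all illustrative assumptions, chosen only to show where an adaptive penalty update sits relative to the multiplier update.

```python
# Illustrative augmented Lagrangian sketch (NOT the paper's exact method).
# Toy problem: minimize f(x) = x1^2 + x2^2  subject to  c(x) = x1 + x2 - 1 = 0.
# Known solution: x* = (0.5, 0.5) with multiplier y* = 1.

def f_grad(x):
    return [2.0 * x[0], 2.0 * x[1]]

def c(x):
    return x[0] + x[1] - 1.0

def c_grad(x):
    return [1.0, 1.0]

def augmented_lagrangian(x, max_outer=50, mu=10.0, tol=1e-8):
    """Outer loop over multiplier/penalty updates; inner gradient descent."""
    y = 0.0                      # Lagrange multiplier estimate
    viol_prev = float("inf")     # previous constraint violation
    for _ in range(max_outer):
        # Inner loop: approximately minimize the augmented Lagrangian
        #   L_A(x; y, mu) = f(x) - y*c(x) + (mu/2)*c(x)^2
        # by gradient descent with a crude curvature-based step size.
        step = 1.0 / (2.0 + mu)
        for _ in range(1000):
            cv = c(x)
            g = [fg - y * cg + mu * cv * cg
                 for fg, cg in zip(f_grad(x), c_grad(x))]
            x = [xi - step * gi for xi, gi in zip(x, g)]
            if max(abs(gi) for gi in g) < tol:
                break
        viol = abs(c(x))
        # First-order multiplier update.
        y -= mu * c(x)
        if viol <= tol:
            break
        # Adaptive penalty heuristic (a stand-in for the adaptive update
        # studied in the paper): increase mu only when the constraint
        # violation did not decrease by a sufficient factor.
        if viol > 0.25 * viol_prev:
            mu *= 10.0
        viol_prev = viol
    return x, y
```

Note that the inner minimization touches the problem only through gradient evaluations, which is the matrix-free property the abstract refers to; a traditional variant would instead increase mu on a fixed schedule regardless of progress.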
Keywords: nonlinear optimization, nonconvex optimization, large-scale optimization, augmented Lagrangians, matrix-free methods, steering methods
Category 1: Nonlinear Optimization
Citation: F. E. Curtis, H. Jiang, and D. P. Robinson. An Adaptive Augmented Lagrangian Method for Large-Scale Constrained Optimization. Mathematical Programming, 152(1–2):201–245, 2015.
Entry Submitted: 12/17/2012