Optimization Online


An Augmented Lagrangian based Algorithm for Distributed Non-Convex Optimization

Boris Houska (boris.houska***at***gmx.de)
Janick Frasch (frasch***at***ovgu.de)
Moritz Diehl (moritz.diehl***at***imtek.uni-freiburg.de)

Abstract: This paper presents distributed derivative-based algorithms for solving optimization problems with a separable (potentially nonconvex) objective function and coupled affine constraints. A parallelizable method is proposed that combines ideas from sequential quadratic programming (SQP) and augmented Lagrangian algorithms. The method negotiates shared dual variables that may be interpreted as prices, a concept employed in dual decomposition methods and in the alternating direction method of multipliers (ADMM). Each agent solves its own small-scale nonlinear programming problem and communicates with the other agents through coupled quadratic programming problems; these coupled problems have only equality constraints, for which parallelizable solution methods are available. Because the method builds on standard SQP techniques, it achieves a superlinear or quadratic convergence rate under suitable conditions, in contrast to existing decomposition methods, such as ADMM, which have a linear convergence rate. It is also shown how the proposed algorithm may be extended with globalization techniques that guarantee convergence to a local minimizer from any initial starting point.
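To make the two-phase structure described in the abstract concrete, the following is a minimal illustrative sketch, not the authors' exact algorithm: each agent first solves a small decoupled augmented-Lagrangian problem, and a coordinator then solves a coupled equality-constrained QP built from the agents' gradients and Hessians, updating the shared primal reference and the dual "price". The toy problem, the variable names (`rho`, `z`, `lam`), and the closed-form local solves (possible here because the agent objectives are quadratic) are all our own choices for the sake of a self-contained example.

```python
# Sketch of an augmented-Lagrangian / SQP coordination loop on the toy
# separable problem (assumed for illustration, not from the paper):
#     min  x1^2 + (x2 - 3)^2   s.t.  x1 + x2 = 2
# Agent 1 owns f1(x1) = x1^2, agent 2 owns f2(x2) = (x2 - 3)^2, and the
# affine coupling constraint is A1*x1 + A2*x2 = b with A = [1, 1], b = 2.

def solve_toy_distributed(rho=1.0, iters=20):
    H = [2.0, 2.0]                            # exact Hessians of f1, f2
    grad = [lambda x: 2 * x,                  # gradient of f1
            lambda x: 2 * (x - 3)]            # gradient of f2
    A, b = [1.0, 1.0], 2.0                    # coupling constraint data
    z = [0.0, 0.0]                            # shared primal reference
    lam = 0.0                                 # dual variable ("price")
    for _ in range(2):
        # Phase 1: each agent solves its decoupled augmented-Lagrangian NLP
        #   min_x f_i(x) + lam*A_i*x + (rho/2)*(x - z_i)^2
        # (closed form here because f_i is quadratic):
        x = [(rho * z[0] - lam) / (2 + rho),
             (6 + rho * z[1] - lam) / (2 + rho)]
        # Phase 2: coordinator solves the coupled equality-constrained QP
        #   min_d  sum_i 0.5*H_i*d_i^2 + g_i*d_i
        #   s.t.   sum_i A_i*(x_i + d_i) = b
        # KKT conditions give d_i = -(g_i + mu*A_i)/H_i; eliminating d_i
        # via the constraint yields the QP multiplier mu directly:
        g = [grad[i](x[i]) for i in range(2)]
        mu = (sum(A[i] * (x[i] - g[i] / H[i]) for i in range(2)) - b) \
             / sum(A[i] ** 2 / H[i] for i in range(2))
        d = [-(g[i] + mu * A[i]) / H[i] for i in range(2)]
        z = [x[i] + d[i] for i in range(2)]   # update primal reference
        lam = mu                              # update shared price
    return z, lam

z, lam = solve_toy_distributed()
print(z, lam)  # converges to the optimizer [-0.5, 2.5] with multiplier 1.0
```

Because the coordination QP uses exact second-order information, the iteration recovers the Newton-like behavior the abstract points to: on this quadratic toy problem the exact solution is reached after a single coordination step, whereas a first-order scheme such as ADMM would approach it only linearly.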

Keywords: Nonlinear Optimization, Distributed Optimization, Numerical Algorithms

Category 1: Nonlinear Optimization (Constrained Nonlinear Optimization )


Entry Submitted: 07/04/2014
Entry Accepted: 07/04/2014
Entry Last Modified: 03/20/2016
