Optimization Online


The Rate of Convergence of Augmented Lagrange Method for a Composite Optimization Problem

Liwei Zhang (lwzhang***at***dlut.edu.cn)
Jihong Zhang (jhzhang***at***mail.dlut.edu.cn)
Yule Zhang (ylzhang***at***mail.dlut.edu.cn)

Abstract: In this paper we analyze the local convergence rate of the augmented Lagrange method for optimization problems with equality constraints whose objective is the sum of a convex function and a twice continuously differentiable function. The non-smoothness of the convex term in the objective requires tools such as second-order variational analysis of the Moreau-Yosida regularization of a convex function and matrix techniques for estimating the generalized Hessian of the dual function defined by the augmented Lagrangian. Under two conditions, we prove that the augmented Lagrange method converges linearly, with a ratio constant proportional to $1/c$, where $c$ is the penalty parameter exceeding a threshold $\overline c > 0$. As an illustrative example, for the nonlinear semidefinite programming problem we show that the assumptions used by Sun et al. \cite{SSZhang2008} for the convergence rate of the augmented Lagrange method are sufficient for the two conditions adopted in this paper.
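To make the scheme concrete, the following is a minimal sketch (not the authors' algorithm) of an augmented Lagrange iteration for a toy composite problem of the form studied here: minimize $f(x) + g(x)$ subject to a linear equality constraint, with $f$ smooth quadratic and $g$ an $\ell_1$ penalty standing in for the non-smooth convex term. The inner subproblem is solved approximately by proximal gradient steps; all problem data and parameter choices below are illustrative assumptions.

```python
def soft_threshold(v, t):
    """Prox of t*||.||_1: componentwise soft-thresholding."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi > 0 else -1.0) for vi in v]

def augmented_lagrange(a, b, lam, c=10.0, outer=50, inner=200):
    """Minimize 0.5*||x - a||^2 + lam*||x||_1  s.t.  sum(x) = b.

    Augmented Lagrangian: L_c(x, mu) = f(x) + g(x) + mu*h(x) + (c/2)*h(x)^2,
    with h(x) = sum(x) - b. Inner loop: proximal gradient on the smooth part;
    outer loop: multiplier update mu <- mu + c*h(x).
    """
    n = len(a)
    x = [0.0] * n
    mu = 0.0
    step = 1.0 / (1.0 + c * n)  # 1 / Lipschitz constant of the smooth gradient
    for _ in range(outer):
        for _ in range(inner):
            r = sum(x) - b  # constraint residual h(x)
            grad = [(x[i] - a[i]) + mu + c * r for i in range(n)]
            y = [x[i] - step * grad[i] for i in range(n)]
            x = soft_threshold(y, step * lam)
        mu += c * (sum(x) - b)  # multiplier update
    return x, mu

# Toy instance (illustrative data): a = (1, -2, 0.5), b = 1, lam = 0.1.
x, mu = augmented_lagrange([1.0, -2.0, 0.5], 1.0, 0.1)
```

For this instance the KKT conditions give $x_i = \mathrm{soft}(a_i - \mu,\, 0.1)$ with $\sum_i x_i = 1$, i.e. $\mu = -8/15$, so the iterates can be checked against the closed-form solution. The multiplier error contracts by roughly $1/(1 + cn)$ per outer iteration, consistent with a linear rate whose ratio is proportional to $1/c$.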

Keywords: composite optimization problem, augmented Lagrange method, rate of convergence, Moreau-Yosida regularization, variational analysis

Category 1: Applications -- OR and Management Sciences

Category 2: Convex and Nonsmooth Optimization (Nonsmooth Optimization )

Category 3: Nonlinear Optimization


Download: [PDF]

Entry Submitted: 11/16/2016
Entry Accepted: 11/16/2016
Entry Last Modified: 11/17/2016
