- The Rate of Convergence of the Augmented Lagrange Method for a Composite Optimization Problem
Liwei Zhang (lwzhang@dlut.edu.cn), Jihong Zhang (jhzhang@mail.dlut.edu.cn), Yule Zhang (ylzhang@mail.dlut.edu.cn)
Abstract: In this paper we analyze the rate of local convergence of the augmented Lagrange method for solving optimization problems with equality constraints whose objective function is the sum of a convex function and a twice continuously differentiable function. The non-smoothness of the convex term in the objective requires tools such as second-order variational analysis of the Moreau-Yosida regularization of a convex function and matrix techniques for estimating the generalized Hessian of the dual function defined by the augmented Lagrangian. Under two conditions, we prove that the augmented Lagrange method converges linearly and that the ratio constant is proportional to $1/c$, where $c$ is the penalty parameter, provided $c$ exceeds a threshold $\overline c > 0$. As an illustrative example, for the nonlinear semidefinite programming problem we show that the assumptions used by Sun et al. \cite{SSZhang2008} for the rate of convergence of the augmented Lagrange method are sufficient for the two conditions adopted in this paper.
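The claimed linear rate with ratio proportional to $1/c$ can be observed on a toy one-dimensional instance. The sketch below is illustrative only and not taken from the paper: the problem data, function names, and closed-form inner solve are our own choices, picked so that every step of the augmented Lagrange iteration has an explicit prox (soft-thresholding) formula.

```python
# Hypothetical toy composite problem (not from the paper):
#   minimize 0.5*x^2 + |x|   subject to   h(x) = x - 1 = 0,
# with solution x* = 1 and multiplier lambda* = -2.

def soft_threshold(v, t):
    """Prox of t*|.| evaluated at v: sign(v) * max(|v| - t, 0)."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def augmented_lagrange(c=10.0, lam=0.0, iters=10):
    """Multiplier iteration lam <- lam + c*h(x); returns the last primal
    iterate, the last multiplier, and the multiplier errors |lam_k + 2|."""
    errors = [abs(lam + 2.0)]
    for _ in range(iters):
        # Inner step: minimize the augmented Lagrangian over x,
        #   0.5*x^2 + |x| + lam*(x - 1) + (c/2)*(x - 1)^2,
        # which reduces to one soft-thresholding (prox) evaluation.
        x = soft_threshold((c - lam) / (1.0 + c), 1.0 / (1.0 + c))
        lam = lam + c * (x - 1.0)  # multiplier update
        errors.append(abs(lam + 2.0))
    return x, lam, errors

x, lam, errors = augmented_lagrange(c=10.0)
# On this instance consecutive multiplier errors contract by the factor
# 1/(1 + c), i.e. a linear rate whose ratio behaves like 1/c for large c.
print(x, lam, errors[1] / errors[0])
```

Increasing `c` (say from 10 to 100) shrinks the contraction factor from 1/11 to 1/101, which is the qualitative behavior the abstract describes for penalty parameters above the threshold.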
Keywords: composite optimization problem, augmented Lagrange method, rate of convergence, Moreau-Yosida regularization, variational analysis
Category 1: Applications -- OR and Management Sciences
Category 2: Convex and Nonsmooth Optimization (Nonsmooth Optimization)
Category 3: Nonlinear Optimization
Entry Submitted: 11/16/2016
Entry Accepted: 11/16/2016
Entry Last Modified: 11/17/2016