Optimization Online

Proximal augmented Lagrangian method for convex optimization with linear inequality constraints

Shengjie Xu(xsjnsu***at***163.com)
Jing Yuan(jyuan***at***xidian.edu.cn)
Bingsheng He(hebma***at***nju.edu.cn)

Abstract: The augmented Lagrangian method (ALM) is a classical benchmark for solving convex optimization problems with linear equality constraints: \begin{equation*} \min\{\theta(x) \;\big|\; Ax = b,\; x\in{\cal X} \}. \end{equation*} In practice, the proximal regularized version of ALM, obtained by adding a positive-definite quadratic proximal term to the update of the primal variables, is often used to tackle such problems, and its convergence has been well studied in the literature. A recent study reveals the interesting fact that positive definiteness of the regularization term is not required to ensure convergence of the proximal regularized ALM; this observation yields an indefinite regularized proximal ALM with a larger step size that can speed up convergence. In this work, we propose a novel indefinite regularized ALM framework for the linear inequality constrained convex optimization problem: \begin{equation*} \min\{\theta(x) \;\big|\; Ax\geq b,\; x\in{\cal X} \}. \end{equation*} The new methods offer two main advantages: the first method linearizes the core subproblem of ALM, and the relaxation to indefinite proximal regularization allows a larger step size that can reduce the number of iterations required for convergence; the second method does not require estimating the maximum eigenvalue of $A^TA$ and has the same computational complexity as ALM. Moreover, we establish the global convergence of the proposed methods and show their worst-case $\mathcal{O}(1/t)$ convergence rate. Numerical experiments demonstrate that the proposed indefinite linearized augmented Lagrangian method is efficient for practical problems such as SVM training and image segmentation.
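For reference, a minimal sketch of the classical ALM recursion for the equality constrained problem and of its proximally regularized variant is given below. This is standard background only: the penalty parameter $\beta>0$, multiplier $\lambda^k$, and proximal matrix $D$ are generic symbols, and the specific (indefinite) choice of $D$ as well as the treatment of the inequality constraint $Ax\geq b$ are detailed in the paper itself. \begin{align*} \text{(ALM)}\quad & x^{k+1} \in \arg\min_{x\in{\cal X}}\Big\{ \theta(x) - (\lambda^k)^T(Ax-b) + \tfrac{\beta}{2}\|Ax-b\|^2 \Big\}, \\ & \lambda^{k+1} = \lambda^k - \beta\,(Ax^{k+1}-b); \\ \text{(proximal ALM)}\quad & x^{k+1} \in \arg\min_{x\in{\cal X}}\Big\{ \theta(x) - (\lambda^k)^T(Ax-b) + \tfrac{\beta}{2}\|Ax-b\|^2 + \tfrac{1}{2}\|x-x^k\|_D^2 \Big\}. \end{align*} Choosing $D = rI - \beta A^TA$ cancels the quadratic coupling term $\tfrac{\beta}{2}\|Ax\|^2$ and thus linearizes the subproblem; the classical requirement $r \geq \beta\|A^TA\|$ makes $D$ positive semidefinite, whereas the indefinite regularization discussed above relaxes this requirement so that a smaller $r$ (i.e., a larger step size) can be used. For the inequality constrained problem, the multiplier update is typically projected onto the nonnegative orthant, $\lambda^{k+1} = [\lambda^k - \beta(Ax^{k+1}-b)]_+$.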

Keywords: convex programming, augmented Lagrangian method, convergence rate, inequality constraint, image segmentation

Category 1: Convex and Nonsmooth Optimization

Category 2: Convex and Nonsmooth Optimization (Convex Optimization)

Citation:

Download: [PDF]

Entry Submitted: 05/10/2019
Entry Accepted: 05/10/2019
Entry Last Modified: 05/10/2019
