Optimization Online


A parallel splitting ALM-based algorithm for separable convex programming

Shengjie Xu (xsjnsu***at***163.com)
Bingsheng He (hebma***at***nju.edu.cn)

Abstract: The augmented Lagrangian method (ALM) provides a benchmark for tackling the canonical convex minimization problem with linear constraints. We consider the special case in which the objective function is the sum of $m$ individual subfunctions without coupled variables. Recent studies reveal that the direct extension of ALM to separable convex programming problems is not necessarily convergent, even though it is empirically efficient in many applications. This has inspired a number of variants. In this paper, with parallel computing in mind, we propose a novel operator splitting method for solving the separable convex minimization problem. We first show that a fully Jacobian decomposition of the regularized ALM yields a descent direction along which proximity to the solution set contracts; we then generate the new iterate via a simple correction step with a constant step size. We establish convergence of the proposed method and illustrate its numerical efficiency on the latent variable Gaussian graphical model selection problem arising in statistical learning.
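The prediction-correction structure described in the abstract (a fully parallel, proximally regularized Jacobian decomposition of the ALM subproblem, followed by a correction step with a constant step size) can be sketched on a toy separable problem. The sketch below is illustrative only: it assumes quadratic subfunctions $f_i(x_i) = \frac{1}{2}(x_i - c_i)^2$ with the constraint $\sum_i x_i = b$ and identity coefficient matrices, so each regularized subproblem has a closed form; the parameter choices (`beta`, the proximal weight `tau`, and the correction step `alpha`) are hypothetical and not the paper's exact rules.

```python
import numpy as np

def parallel_splitting_alm(c, b, beta=1.0, tau=3.0, alpha=0.8, iters=1000):
    """Schematic prediction-correction scheme for
        min  sum_i 0.5*(x_i - c_i)^2   s.t.  sum_i x_i = b.
    Illustrative parameter values; not the paper's exact step-size rule."""
    c = np.asarray(c, dtype=float)
    m = len(c)
    x = np.zeros(m)   # primal blocks x_1, ..., x_m
    lam = 0.0         # Lagrange multiplier of the linear constraint
    for _ in range(iters):
        S = x.sum()
        # Prediction: fully Jacobian (parallelizable) regularized ALM step.
        # Each block minimizes its augmented-Lagrangian subproblem with the
        # other blocks frozen at x^k, plus a proximal regularization term
        # (tau*beta/2)*(x_i - x_i^k)^2; for this quadratic model the
        # minimizer is available in closed form.
        x_pred = (c + lam + beta * (b - (S - x)) + tau * beta * x) \
                 / (1.0 + beta + tau * beta)
        lam_pred = lam - beta * (x_pred.sum() - b)
        # Correction: move a constant step alpha toward the predictor.
        x = x - alpha * (x - x_pred)
        lam = lam - alpha * (lam - lam_pred)
    return x, lam
```

For `c = [1, 2, 3]` and `b = 3`, the KKT conditions give $x_i^* = c_i + \lambda^*$ with $\lambda^* = (b - \sum_i c_i)/m = -1$, i.e. the iterates should approach $x^* = (0, 1, 2)$. Without the proximal term and the correction step, the plain Jacobian decomposition can fail to converge for $m \ge 3$, which is the motivation the abstract cites.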

Keywords: Convex programming, Augmented Lagrangian method, Jacobian decomposition, Parallel computing, Operator splitting methods

Category 1: Convex and Nonsmooth Optimization

Category 2: Convex and Nonsmooth Optimization (Convex Optimization)

Category 3: Optimization Software and Modeling Systems (Parallel Algorithms)



Entry Submitted: 01/10/2020
Entry Accepted: 01/10/2020
Entry Last Modified: 05/04/2021
