Optimization Online

Discerning the linear convergence of ADMM for structured convex optimization through the lens of variational analysis

Xiaoming Yuan (xmyuan***at***hku.hk)
Shangzhi Zeng (zengsz***at***connect.hku.hk)
Jin Zhang (zhangjin198637***at***gmail.com)

Abstract: Despite the rich literature, the linear convergence of the alternating direction method of multipliers (ADMM) is not yet fully understood, even in the convex case. For example, the linear convergence of ADMM is empirically observed in a wide range of applications, yet existing theoretical results seem either too stringent to be satisfied or too ambiguous to be checked, so it remains unclear why ADMM converges linearly for these applications. In this paper, we systematically study the linear convergence of ADMM in the context of convex optimization through the lens of variational analysis. We show that the linear convergence of ADMM can be guaranteed without requiring strong convexity of the objective functions together with full-rank assumptions on the coefficient matrices, or full polyhedricity assumptions on their subdifferentials; and that it is possible to discern linear convergence for various concrete applications, especially for some representative models arising in statistical learning. We make sophisticated use of variational analysis techniques, and our analysis is conducted for the most general proximal version of ADMM with Fortin and Glowinski's larger step size, so that all major variants of ADMM known in the literature are covered. We also deepen the discussion from the dual perspective and show, as byproducts, how to discern the linear convergence of other methods closely related to the various variants of ADMM, including the Douglas-Rachford splitting method in general operator form and the primal-dual hybrid gradient method for saddle-point problems.
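
For orientation, here is a minimal sketch of the two-block ADMM scheme the abstract refers to, in standard (assumed) notation rather than the paper's own:

% A minimal sketch (assumed notation, not the paper's): classical two-block ADMM
% for  min_{x,y} f(x) + g(y)  subject to  Ax + By = b,
% with augmented Lagrangian
%   L_beta(x, y, lambda) = f(x) + g(y) - <lambda, Ax + By - b> + (beta/2) ||Ax + By - b||^2.
\begin{align*}
  x^{k+1} &\in \operatorname*{arg\,min}_{x} \; L_\beta(x, y^k, \lambda^k), \\
  y^{k+1} &\in \operatorname*{arg\,min}_{y} \; L_\beta(x^{k+1}, y, \lambda^k), \\
  \lambda^{k+1} &= \lambda^k - \gamma \beta \left( A x^{k+1} + B y^{k+1} - b \right),
  \qquad \gamma \in \left( 0, \tfrac{1+\sqrt{5}}{2} \right).
  % gamma = 1 recovers the classical ADMM dual update;
  % Fortin and Glowinski's analysis allows gamma up to the golden ratio.
\end{align*}

The proximal variants covered by the paper's analysis additionally regularize each subproblem, e.g. with a term of the form (1/2)||x - x^k||_S^2 for a positive semidefinite matrix S (again, notation assumed here for illustration).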

Keywords: Convex programming, variational analysis, alternating direction method of multipliers, linear convergence, calmness, metric subregularity, statistical learning.

Category 1: Convex and Nonsmooth Optimization

Category 2: Convex and Nonsmooth Optimization (Nonsmooth Optimization)

Citation: First version: August 2018

Download: [PDF]

Entry Submitted: 10/23/2018
Entry Accepted: 10/23/2018
Entry Last Modified: 11/16/2018
