Optimization Online


Fixing and extending some recent results on the ADMM algorithm

Sebastian Banert(sebastian.banert***at***univie.ac.at)
Radu Ioan Bot(radu.bot***at***univie.ac.at)
Ernö Robert Csetnek(ernoe.robert.csetnek***at***univie.ac.at)

Abstract: We first point out several flaws in the recent paper {\it [R. Shefi, M. Teboulle: Rate of convergence analysis of decomposition methods based on the proximal method of multipliers for convex minimization, SIAM J. Optim. 24, 269--297, 2014]}, which proposes two ADMM-type algorithms for solving convex optimization problems involving compositions with linear operators, and we show how some of the affected arguments can be repaired. Beyond this, we formulate a variant of the ADMM algorithm that can handle convex optimization problems whose objective contains an additional smooth function, accessed through its gradient. Moreover, we allow the use of variable metrics in each iteration, and the analysis is carried out in the setting of infinite-dimensional Hilbert spaces. The convergence properties of this algorithmic scheme are investigated.
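To fix ideas, the problem class behind the abstract can be illustrated with the classical two-block ADMM for a problem of the form min f(x) + g(z) subject to x = z. The sketch below is a minimal, finite-dimensional, fixed-metric instance (a lasso problem solved with textbook ADMM); it is not the authors' variable-metric Hilbert-space variant, nor does it include the additional smooth term evaluated through its gradient, but it shows the alternating structure of the iteration.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (componentwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """Textbook ADMM for  min_x 0.5*||A x - b||^2 + lam*||z||_1  s.t.  x = z.

    Illustrative sketch only: fixed penalty/metric rho, finite dimensions.
    """
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                      # scaled dual variable
    M = A.T @ A + rho * np.eye(n)        # matrix of the x-update normal equations
    Atb = A.T @ b
    for _ in range(n_iter):
        # minimize the augmented Lagrangian in x (quadratic subproblem)
        x = np.linalg.solve(M, Atb + rho * (z - u))
        # proximal step on the nonsmooth l1 term
        z = soft_threshold(x + u, lam / rho)
        # dual ascent on the coupling constraint x = z
        u = u + x - z
    return x, z
```

For A equal to the identity, the minimizer is the componentwise soft threshold of b, and the iteration recovers it; e.g. with b = (3, 0.5) and lam = 1 the limit of z is (2, 0).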

Keywords: ADMM algorithm, Lagrangian, saddle points, subdifferential, convex optimization, Fenchel duality

Category 1: Convex and Nonsmooth Optimization (Convex Optimization )



Entry Submitted: 12/16/2016
Entry Accepted: 12/16/2016
Entry Last Modified: 12/16/2016

