Optimization Online

ADMM for Multiaffine Constrained Optimization

Wenbo Gao (wg2279***at***columbia.edu)
Donald Goldfarb (goldfarb***at***columbia.edu)
Frank E. Curtis (frank.e.curtis***at***gmail.com)

Abstract: We propose an expansion of the scope of the alternating direction method of multipliers (ADMM). Specifically, we show that ADMM, when employed to solve problems with multiaffine constraints that satisfy certain easily verifiable assumptions, converges to the set of constrained stationary points if the penalty parameter in the augmented Lagrangian is sufficiently large. When the Kurdyka-Łojasiewicz (K-Ł) property holds, this is strengthened to convergence to a single constrained stationary point. Our analysis applies under assumptions that we have endeavored to make as weak as possible; it covers problems with nonconvex and/or nonsmooth objective terms, in addition to multiaffine constraints involving multiple (three or more) blocks of variables. To illustrate the applicability of our results, we describe examples including nonnegative matrix factorization, sparse learning, risk parity portfolio selection, nonconvex formulations of convex problems, and neural network training. In each case, our ADMM approach encounters only subproblems that have closed-form solutions.
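
To make the setting concrete, below is a minimal NumPy sketch of an ADMM iteration for a toy matrix-factorization problem with the biaffine constraint W = XY. It is not the paper's algorithm or splitting; the function name, penalty value rho, and the small regularization eps are illustrative assumptions. It only shows how each block update in a multiaffine ADMM can be computed in closed form.

import numpy as np

def multiaffine_admm_mf(Z, r, rho=10.0, iters=200, seed=0):
    """Illustrative ADMM for  min ||Z - W||_F^2  s.t.  W = X Y  (biaffine constraint).

    A hand-rolled sketch of the general idea, not the authors' formulation:
    each block update below has a closed-form solution.
    """
    rng = np.random.default_rng(seed)
    m, n = Z.shape
    X = rng.standard_normal((m, r))
    Y = rng.standard_normal((r, n))
    W = Z.copy()
    Lam = np.zeros_like(Z)      # multiplier for the constraint W - X Y = 0
    eps = 1e-8                  # small regularization keeping the solves well-posed

    for _ in range(iters):
        T = W + Lam / rho
        # X-update: argmin_X (rho/2) ||T - X Y||_F^2   (linear least squares in X)
        X = T @ Y.T @ np.linalg.inv(Y @ Y.T + eps * np.eye(r))
        # Y-update: argmin_Y (rho/2) ||T - X Y||_F^2   (linear least squares in Y)
        Y = np.linalg.inv(X.T @ X + eps * np.eye(r)) @ X.T @ T
        # W-update: argmin_W ||Z - W||_F^2 + (rho/2) ||W - X Y + Lam/rho||_F^2
        W = (2.0 * Z + rho * (X @ Y) - Lam) / (2.0 + rho)
        # Dual ascent step on the multiaffine constraint W - X Y = 0
        Lam = Lam + rho * (W - X @ Y)

    return X, Y, W

if __name__ == "__main__":
    Z = np.random.default_rng(1).standard_normal((30, 20))
    X, Y, W = multiaffine_admm_mf(Z, r=5)
    print("constraint residual ||W - XY||_F:", np.linalg.norm(W - X @ Y))

The applications listed in the abstract add further variable blocks, regularizers, and indicator functions, but the block-by-block structure with closed-form subproblem solutions is the same idea.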

Keywords: nonlinear optimization, multiaffine constraints, nonconvex optimization, ADMM, alternating minimization methods

Category 1: Nonlinear Optimization (Constrained Nonlinear Optimization)

Category 2: Convex and Nonsmooth Optimization (Nonsmooth Optimization)

Citation:

Download: [PDF]

Entry Submitted: 02/26/2018
Entry Accepted: 02/26/2018
Entry Last Modified: 08/29/2018
