Optimization Online
The primal-dual hybrid gradient method reduces to a primal method for linearly constrained optimization problems

Yura Malitsky (y.malitsky***at***gmail.com)

Abstract: In this work, we show that for linearly constrained optimization problems the primal-dual hybrid gradient algorithm, analyzed by Chambolle and Pock [3], can be written as an entirely primal algorithm. This allows us to prove convergence of the iterates even in degenerate cases, when the linear system is inconsistent or when strong duality does not hold. We also obtain new convergence rates that appear to improve on existing ones in the literature. For decentralized distributed optimization, we show that the new scheme is much more efficient than the original one.
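As a rough illustration of the setting (not the paper's reformulation itself): for a linearly constrained problem min_x f(x) s.t. Ax = b, the second term in the Chambolle-Pock splitting is the indicator of {b}, whose conjugate reduces the dual prox to a simple shift by b. The sketch below applies standard PDHG to such a problem with f(x) = ½‖x − c‖², chosen only because its prox has a closed form; all problem data here are synthetic assumptions.

```python
import numpy as np

# Hypothetical example: PDHG (Chambolle-Pock) on
#     min_x  0.5*||x - c||^2   s.t.  A x = b.
# With g the indicator of {b}, the dual update simplifies to
#     y <- y + sigma * (A(2*x_new - x) - b).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
b = rng.standard_normal(3)
c = rng.standard_normal(5)

# step sizes satisfying tau * sigma * ||A||^2 < 1
tau = sigma = 0.9 / np.linalg.norm(A, 2)

x = np.zeros(5)
y = np.zeros(3)
for _ in range(5000):
    # primal step: prox of tau*f evaluated at x - tau*A^T y
    x_new = (x - tau * A.T @ y + tau * c) / (1 + tau)
    # dual step with the extrapolated point 2*x_new - x
    y = y + sigma * (A @ (2 * x_new - x) - b)
    x = x_new

print(np.linalg.norm(A @ x - b))  # feasibility residual
```

Since this A has full row rank, the system is consistent and the iterates converge to the projection of c onto {x : Ax = b}; the degenerate inconsistent case treated in the paper is precisely where this standard analysis breaks down.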

Keywords: First-order algorithms, primal-dual algorithms, convergence rates, linearly constrained optimization problem, penalty methods, distributed optimization

Category 1: Convex and Nonsmooth Optimization (Convex Optimization )

Category 2: Convex and Nonsmooth Optimization (Nonsmooth Optimization )

Citation:

Download: [PDF]

Entry Submitted: 08/27/2018
Entry Accepted: 08/27/2018
Entry Last Modified: 08/29/2018


