Optimization Online

On Nonconvex Decentralized Gradient Descent

Jinshan Zeng (jsh.zeng***at***gmail.com)
Wotao Yin (wotaoyin***at***ucla.edu)

Abstract: Consensus optimization has received considerable attention in recent years. A number of decentralized algorithms have been proposed for \emph{convex} consensus optimization. However, our understanding of the behavior of these algorithms on \emph{nonconvex} consensus optimization is more limited. When we lose convexity, we cannot hope that our algorithms always return global solutions, though they sometimes still do. Somewhat surprisingly, the decentralized consensus algorithms DGD and Prox-DGD retain most of the other properties that are known in the convex setting. In particular, when diminishing (or constant) step sizes are used, we can prove convergence to a (or a neighborhood of a) consensus stationary solution and establish guaranteed rates of convergence. It is worth noting that Prox-DGD can handle nonconvex nonsmooth functions as long as their proximal operators can be computed. Such functions include SCAD and the $\ell_q$ quasi-norms, $q\in[0,1)$. Similarly, Prox-DGD can handle constraints to a nonconvex set with an easy projection. To establish these properties, we introduce a completely different line of analysis, as well as modify existing proofs that were used in the convex setting.
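Below is a minimal Python sketch of the DGD and Prox-DGD iterations described in the abstract, not the authors' implementation; the names (grads, proxes, W, alpha) and the choice of step-size schedule are illustrative assumptions only.

    import numpy as np

    def dgd(grads, W, x0, alpha, iters=500):
        """Sketch of decentralized gradient descent (DGD).

        grads : list of per-agent gradient functions grad_i(x)  (assumed interface)
        W     : symmetric, doubly stochastic mixing matrix of the network
        x0    : (n_agents, dim) array of initial local copies
        alpha : step-size schedule alpha(k); a diminishing choice such as
                lambda k: 1.0 / np.sqrt(k + 1) matches the diminishing-step setting
        """
        x = x0.copy()
        for k in range(iters):
            mixed = W @ x                                            # neighbor averaging (consensus step)
            g = np.stack([grads[i](x[i]) for i in range(len(grads))])
            x = mixed - alpha(k) * g                                 # local gradient step
        return x

    def prox_dgd(grads, proxes, W, x0, alpha, iters=500):
        """Sketch of Prox-DGD for objectives f_i = s_i + r_i, where s_i is smooth
        (grads) and r_i is possibly nonconvex and nonsmooth but has a computable
        proximal map proxes[i](z, step)."""
        x = x0.copy()
        for k in range(iters):
            mixed = W @ x
            g = np.stack([grads[i](x[i]) for i in range(len(grads))])
            z = mixed - alpha(k) * g
            x = np.stack([proxes[i](z[i], alpha(k)) for i in range(len(proxes))])
        return x
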

Keywords: Nonconvex decentralized computing, consensus optimization, decentralized gradient descent method, proximal decentralized gradient descent

Category 1: Network Optimization

Category 2: Convex and Nonsmooth Optimization (Nonsmooth Optimization )

Citation:

Download: [PDF]

Entry Submitted: 02/11/2017
Entry Accepted: 03/01/2017
Entry Last Modified: 06/13/2017
