Optimization Online
Second-order Guarantees of Distributed Gradient Algorithms

Amir Daneshmand (adaneshm***at***purdue.edu)
Gesualdo Scutari (gscutari***at***purdue.edu)
Vyacheslav Kungurtsev (vyacheslav.kungurtsev***at***fel.cvut.cz)

Abstract: We consider distributed smooth nonconvex unconstrained optimization over networks, modeled as a connected graph. We examine the behavior of distributed gradient-based algorithms near strict saddle points. Specifically, we establish that (i) the renowned Distributed Gradient Descent (DGD) algorithm likely converges to a neighborhood of a Second-order Stationary (SoS) solution; and (ii) the more recent class of distributed algorithms based on gradient tracking, implementable also over digraphs, likely converges to exact SoS solutions, thus avoiding strict saddle points. Furthermore, a convergence rate is provided for the latter class of algorithms.
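For reference, the two algorithm classes mentioned in the abstract can be sketched as follows; the notation here is our own and is not taken from the paper (x_i is agent i's local copy of the decision variable, W = [w_{ij}] is a mixing matrix compatible with the graph, \alpha > 0 is a stepsize, and f_i is agent i's smooth nonconvex local cost):

  DGD:
    x_i^{k+1} = \sum_j w_{ij} x_j^k - \alpha \nabla f_i(x_i^k)

  Gradient tracking:
    x_i^{k+1} = \sum_j w_{ij} x_j^k - \alpha y_i^k
    y_i^{k+1} = \sum_j w_{ij} y_j^k + \nabla f_i(x_i^{k+1}) - \nabla f_i(x_i^k)

In this sketch, y_i is an auxiliary variable that tracks the network-average gradient; this correction is what allows gradient-tracking schemes to reach exact stationary points, whereas plain DGD with a fixed stepsize is only guaranteed a neighborhood of one, consistent with the statements (i) and (ii) above.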

Keywords: Distributed Gradient Methods, Gradient Tracking, Nonconvex Optimization

Category 1: Nonlinear Optimization

Citation: preprint, Purdue University Technical Report, September 23, 2018

Download: [PDF]

Entry Submitted: 09/23/2018
Entry Accepted: 09/23/2018
Entry Last Modified: 10/09/2018
