Optimization Online

Asynchronous Stochastic Subgradient Methods for General Nonsmooth Nonconvex Optimization

Vyacheslav Kungurtsev (vyacheslav.kungurtsev***at***fel.cvut.cz)
Malcolm Egan (malcom.egan***at***insa-lyon.fr)
Bapi Chatterjee (bapi.chatterjee***at***ist.ac.at)
Dan Alistarh (dan.alistarh***at***ist.ac.at)

Abstract: Asynchronous distributed methods are a popular way to reduce the communication and synchronization costs of large-scale optimization. Yet, for all their success, little is known about their convergence guarantees in the challenging case of general nonsmooth, nonconvex objectives, beyond cases where closed-form proximal operator solutions are available. This is all the more surprising since these are precisely the objectives that arise in training deep neural networks. In this paper, we introduce the first convergence analysis covering asynchronous methods for general nonsmooth, nonconvex objectives. Our analysis applies to stochastic subgradient descent methods both with and without block variable partitioning, and both with and without momentum. It is phrased in terms of a general probabilistic model of asynchronous scheduling that accurately reflects the properties of modern hardware. We validate the analysis experimentally by training deep neural network architectures, demonstrating the methods' overall asymptotic convergence and exploring how momentum, synchronization, and partitioning affect performance.
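
For readers unfamiliar with the setting, the following toy sketch illustrates the kind of iteration the abstract refers to: several workers apply stochastic subgradient steps with momentum to a shared parameter vector without synchronization (Hogwild-style), here on an L1-regularized least-squares problem. It is not the authors' code; all names, constants, and the synthetic data are illustrative assumptions, and with Python threads it demonstrates the update pattern rather than any parallel speedup.

# Minimal, illustrative sketch (not the paper's implementation) of an
# asynchronous, lock-free stochastic subgradient iteration with momentum on a
# nonsmooth objective: least squares plus an L1 regularizer.
import threading
import numpy as np

dim, n_workers, n_steps = 20, 4, 2000
lr, beta, lam = 1e-2, 0.9, 1e-3            # step size, momentum, L1 weight (assumed values)
true_w = 0.1 * np.arange(dim)              # synthetic ground-truth parameters

x = np.zeros(dim)                          # shared iterate, updated without locks
velocity = np.zeros(dim)                   # shared momentum buffer

def worker(seed):
    rng = np.random.default_rng(seed)      # per-worker data stream
    for _ in range(n_steps):
        x_read = x.copy()                  # possibly stale read of the shared iterate
        a = rng.standard_normal(dim)
        b = a @ true_w + 0.01 * rng.standard_normal()
        # stochastic subgradient of (a^T x - b)^2 + lam * ||x||_1
        g = 2.0 * (a @ x_read - b) * a + lam * np.sign(x_read)
        velocity[:] = beta * velocity - lr * g   # momentum update, intentionally racy
        x[:] = x + velocity                      # asynchronous write-back

threads = [threading.Thread(target=worker, args=(s,)) for s in range(n_workers)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("distance to true parameters:", np.linalg.norm(x - true_w))

The writes to x and velocity are deliberately unprotected, which is the delayed/inconsistent-read regime that the paper's probabilistic scheduling model is meant to capture.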

Keywords:

Category 1: Convex and Nonsmooth Optimization (Nonsmooth Optimization )

Citation:

Download: [PDF]

Entry Submitted: 05/28/2019
Entry Accepted: 05/28/2019
Entry Last Modified: 05/28/2019
