Optimization Online

HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent

Feng Niu (leonn***at***cs.wisc.edu)
Benjamin Recht (brecht***at***cs.wisc.edu)
Christopher Ré (chrisre***at***cs.wisc.edu)
Stephen Wright (swright***at***cs.wisc.edu)

Abstract: Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve state-of-the-art performance on a variety of machine learning tasks. Several researchers have recently proposed schemes to parallelize SGD, but all require performance-destroying memory locking and synchronization. This work aims to show, using novel theoretical analysis, algorithms, and implementation, that SGD can be implemented *without any locking*. We present an update scheme called HOGWILD! which allows processors to access shared memory with the possibility of overwriting each other's work. We show that when the associated optimization problem is sparse, meaning most gradient updates only modify small parts of the decision variable, then HOGWILD! achieves a nearly optimal rate of convergence. We demonstrate experimentally that HOGWILD! outperforms alternative schemes that use locking by an order of magnitude.
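For intuition, the following is a minimal, illustrative Python sketch of a lock-free update scheme in the spirit of the abstract: several threads draw examples at random and apply sparse gradient steps to a shared weight vector, with no locks around the writes. The synthetic data, hinge-loss step, step size, and all names below are placeholders rather than the paper's implementation, and Python's global interpreter lock means this toy does not exhibit the multicore speedups reported in the paper; it only mirrors the structure of the algorithm.

import numpy as np
import threading

# Synthetic sparse problem (illustrative only): each example touches just
# `nnz` of the `n_features` coordinates.
rng = np.random.default_rng(0)
n_features, n_examples, nnz = 1000, 5000, 10
idx_list = [rng.choice(n_features, size=nnz, replace=False) for _ in range(n_examples)]
x_list = [rng.standard_normal(nnz) for _ in range(n_examples)]
y = rng.choice([-1.0, 1.0], size=n_examples)

w = np.zeros(n_features)   # shared decision variable, written by all threads without locks
step = 0.01

def worker(seed, n_updates):
    local_rng = np.random.default_rng(seed)
    for _ in range(n_updates):
        i = local_rng.integers(n_examples)
        idx, x = idx_list[i], x_list[i]
        # Hinge-loss subgradient step that reads and writes only the
        # coordinates this example touches.
        if y[i] * float(w[idx] @ x) < 1.0:
            w[idx] += step * y[i] * x   # unsynchronized write; occasional races are tolerated

threads = [threading.Thread(target=worker, args=(s, 2000)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

The design point the sketch tries to convey is that each update only touches the few coordinates appearing in its example, so when the problem is sparse, concurrent unsynchronized writes rarely collide and the overwrites that do occur are small.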

Keywords: Stochastic gradient descent, Incremental gradient methods, Machine learning, Parallel computing, Multicore

Category 1: Nonlinear Optimization

Category 2: Stochastic Programming

Category 3: Applications -- Science and Engineering (Data-Mining)

Citation: Unpublished. University of Wisconsin-Madison, June 2011.

Download: [PDF]

Entry Submitted: 06/28/2011
Entry Accepted: 06/28/2011
Entry Last Modified: 11/11/2011
