HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent
Feng Niu (leonn@cs.wisc.edu)
Abstract: Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve state-of-the-art performance on a variety of machine learning tasks. Several researchers have recently proposed schemes to parallelize SGD, but all require performance-destroying memory locking and synchronization. Using novel theoretical analysis, algorithms, and implementation, this work shows that SGD can be implemented *without any locking*. We present an update scheme called HOGWILD! which allows processors access to shared memory with the possibility of overwriting each other's work. We show that when the associated optimization problem is sparse, meaning most gradient updates only modify small parts of the decision variable, HOGWILD! achieves a nearly optimal rate of convergence. We demonstrate experimentally that HOGWILD! outperforms alternative schemes that use locking by an order of magnitude.
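The sketch below illustrates the lock-free update scheme described in the abstract: several workers share one parameter vector and apply sparse SGD updates to it with no locks, so writes may overwrite one another. This is a minimal illustrative example, not the authors' reference implementation; the toy problem (sparse least squares), the constants, and the helper names (make_data, worker, STEP) are assumptions, and a fork-based multiprocessing start method (the Linux default) is assumed so the shared buffer can be passed to workers.

    # Minimal HOGWILD!-style sketch: lock-free parallel SGD on a shared vector.
    # Problem, step size, and data layout are illustrative, not from the paper.
    import numpy as np
    from multiprocessing import Process, RawArray

    N_FEATURES = 1_000
    N_EXAMPLES = 10_000
    NNZ = 5            # nonzeros per example, so each update touches few coordinates
    STEP = 0.01
    N_WORKERS = 4

    def make_data(seed):
        # Synthetic sparse least-squares data: each example has NNZ nonzero features.
        rng = np.random.default_rng(seed)
        idx = rng.integers(0, N_FEATURES, size=(N_EXAMPLES, NNZ))
        val = rng.normal(size=(N_EXAMPLES, NNZ))
        y = rng.normal(size=N_EXAMPLES)
        return idx, val, y

    def worker(shared_x, seed):
        # View the shared buffer as a NumPy array; all workers read and write it
        # concurrently with no locking, so updates may overwrite each other.
        x = np.frombuffer(shared_x, dtype=np.float64)
        idx, val, y = make_data(seed)
        for i in range(N_EXAMPLES):
            j, a = idx[i], val[i]
            residual = a @ x[j] - y[i]      # read possibly stale components
            x[j] -= STEP * residual * a     # write back only the touched components

    if __name__ == "__main__":
        shared_x = RawArray("d", N_FEATURES)   # shared, unsynchronized parameter vector
        procs = [Process(target=worker, args=(shared_x, s)) for s in range(N_WORKERS)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print("first components:", np.frombuffer(shared_x, dtype=np.float64)[:5])

The key design point, per the abstract, is that because each example touches only a few coordinates, concurrent unsynchronized writes rarely collide, which is what makes the lock-free scheme converge at a nearly optimal rate.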
Keywords: Stochastic gradient descent, Incremental gradient methods, Machine learning, Parallel computing, Multicore
Category 1: Nonlinear Optimization
Category 2: Stochastic Programming
Category 3: Applications -- Science and Engineering (Data-Mining)
Citation: Unpublished. University of Wisconsin-Madison, June 2011.
Entry Submitted: 06/28/2011