Optimization Online

Optimal Distributed Online Prediction using Mini-Batches

Ofer Dekel (oferd***at***microsoft.com)
Ran Gilad-Bachrach (rang***at***microsoft.com)
Ohad Shamir (ohadsh***at***microsoft.com)
Lin Xiao (lin.xiao***at***microsoft.com)

Abstract: Online prediction methods are typically presented as serial algorithms running on a single processor. However, in the age of web-scale prediction problems, it is increasingly common to encounter situations where a single processor cannot keep up with the high rate at which inputs arrive. In this work we present the distributed mini-batch algorithm, a method of converting any serial gradient-based online prediction algorithm into a distributed algorithm. We prove a regret bound for this method that is asymptotically optimal for smooth convex loss functions and stochastic inputs. Moreover, our analysis explicitly takes into account communication latencies between nodes in the distributed environment. Our method can also be used to solve the closely related distributed stochastic optimization problem, attaining an asymptotically linear speedup. We demonstrate the merits of our approach on a web-scale online prediction problem.
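To make the idea concrete, below is a minimal single-process sketch of the mini-batch scheme the abstract describes: several workers each accumulate gradients over their share of a mini-batch at a common predictor, the gradients are averaged, and one step of a serial online algorithm (plain online gradient descent here) is applied. The specifics (number of workers, batch size, step size, squared loss on a synthetic stream) are illustrative assumptions, not the paper's exact update rule or regret analysis, and the sketch ignores the communication latency that the paper's analysis explicitly handles.

import numpy as np

rng = np.random.default_rng(0)
d = 5                        # feature dimension (illustrative)
k = 4                        # number of simulated workers (illustrative)
batch_size = 32              # total mini-batch size b, split evenly across workers
eta = 0.1                    # step size of the underlying serial update
w = np.zeros(d)              # shared predictor
w_true = rng.normal(size=d)  # ground truth used to generate a synthetic input stream

def gradient(w, x, y):
    """Gradient of the squared loss 0.5 * (w.x - y)^2 at example (x, y)."""
    return (w @ x - y) * x

for _ in range(1000 // batch_size):
    # Each worker processes batch_size // k inputs at the *same* predictor w
    # and reports only the sum of its gradients.
    worker_grads = []
    for _ in range(k):
        g = np.zeros(d)
        for _ in range(batch_size // k):
            x = rng.normal(size=d)
            y = w_true @ x + 0.1 * rng.normal()
            g += gradient(w, x, y)
        worker_grads.append(g)

    # Aggregate the worker sums (an all-reduce in a real cluster) and apply a
    # single online-gradient-descent step with the averaged gradient.
    avg_grad = sum(worker_grads) / batch_size
    w -= eta * avg_grad

print("estimation error:", np.linalg.norm(w - w_true))

Because every worker computes gradients at the same predictor before the single aggregated update, the serial algorithm's behavior is preserved while the per-example gradient work is spread across the workers.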

Keywords: online convex optimization, distributed learning, stochastic optimization, parallel computing

Category 1: Convex and Nonsmooth Optimization

Category 2: Stochastic Programming

Category 3: Applications -- Science and Engineering (Data Mining)

Citation: Technical Report, Microsoft Research. 1 Microsoft Way, Redmond, WA 98052.

Download: [PDF]

Entry Submitted: 02/18/2011
Entry Accepted: 02/18/2011
Entry Last Modified: 02/18/2011
