Optimization Online

A Stochastic Gradient Method with an Exponential Convergence Rate for Strongly-Convex Optimization with Finite Training Sets

Nicolas Le Roux (nicolas***at***le-roux.name)
Mark Schmidt (mark.schmidt***at***inria.fr)
Francis Bach (francis.bach***at***ens.fr)

Abstract: We propose a new stochastic gradient method for optimizing the sum of a finite set of smooth functions, where the sum is strongly convex. While standard stochastic gradient methods converge at sublinear rates for this problem, the proposed method incorporates a memory of previous gradient values to achieve a linear (i.e., exponential) convergence rate. Numerical experiments indicate that the new algorithm can dramatically outperform standard algorithms.
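
The update described in the abstract can be sketched in a few lines. The following Python fragment is a minimal illustration of a stochastic-average-gradient style method under assumed interfaces, not the authors' implementation: grad_i(i, x) (the gradient of the i-th function at x), the constant step size, and the iteration count are all placeholders, and a suitable step size would in practice depend on the smoothness and strong-convexity constants of the problem.

    import numpy as np

    def sag_sketch(grad_i, x0, n, step_size, num_iters, seed=0):
        # Keep the most recently observed gradient of each of the n
        # functions, and step along the average of the stored gradients.
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        grad_table = np.zeros((n, x.size))  # last gradient seen for each f_i
        grad_sum = np.zeros_like(x)         # running sum of the table rows
        for _ in range(num_iters):
            i = rng.integers(n)             # sample one function uniformly
            g = grad_i(i, x)
            grad_sum += g - grad_table[i]   # refresh the sum in O(dim) time
            grad_table[i] = g
            x -= step_size * grad_sum / n   # average of stored gradients
        return x

For instance, for l2-regularized least squares one could take grad_i(i, x) = a_i * (a_i @ x - b_i) + lam * x, for which the sum is strongly convex; this is only an illustrative instance, not an experiment from the paper.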

Keywords: stochastic optimization; linear convergence; average gradient

Category 1: Convex and Nonsmooth Optimization (Convex Optimization)

Category 2: Global Optimization (Stochastic Approaches)

Citation: Technical Report HAL 00674995, 2012

Download: [PDF]

Entry Submitted: 02/28/2012
Entry Accepted: 02/28/2012
Entry Last Modified: 02/28/2012
