Optimization Online
A Sparsity Preserving Stochastic Gradient Method for Composite Optimization

Qihang Lin (qihangl at andrew.cmu.edu)
Xi Chen (xichen at cs.cmu.edu)
Javier Peña (jfp at andrew.cmu.edu)

Abstract: We propose new stochastic gradient algorithms for solving convex composite optimization problems. In each iteration, our algorithms utilize a stochastic oracle for the gradient of the smooth component of the objective function. Our algorithms are based on a stochastic version of the estimate sequence technique introduced by Nesterov (Introductory Lectures on Convex Optimization: A Basic Course, Kluwer, 2004). We establish convergence results for the expectation and variance of the objective values of the iterates generated by our algorithms, as well as their large-deviation properties. When applied to sparse regression problems, our algorithms readily enforce sparsity at every iteration. We present numerical experiments on simulated data sets.
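To make the sparsity-preserving composite step concrete, the following is a minimal Python sketch of a plain stochastic proximal-gradient iteration for l1-regularized least squares. This is not the estimate-sequence algorithm of the paper: the problem instance, the diminishing step-size schedule, and the helper names soft_threshold and stochastic_prox_grad are assumptions made for illustration. It shows the mechanism the abstract refers to: applying the l1 proximal map (soft-thresholding) after each stochastic gradient step sets small coordinates exactly to zero, so every iterate is sparse.

import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1; sets small entries exactly to zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_prox_grad(A, b, lam, n_iters=3000, batch=10, seed=0):
    """Stochastic proximal-gradient method for the composite problem
        min_x  (1/2) E_i[(a_i^T x - b_i)^2] + lam * ||x||_1.
    Each iteration takes a stochastic gradient step on the smooth
    component (using a random mini-batch) and then applies the l1
    proximal map, so every iterate is sparse."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    eta0 = 1.0 / np.max(np.sum(A**2, axis=1))  # conservative step scale
    for k in range(1, n_iters + 1):
        idx = rng.integers(0, n, size=batch)           # random mini-batch
        g = A[idx].T @ (A[idx] @ x - b[idx]) / batch   # stochastic gradient
        eta = eta0 / np.sqrt(k)                        # diminishing step size
        x = soft_threshold(x - eta * g, eta * lam)     # composite (prox) step
    return x

# Illustrative run on simulated sparse-regression data.
rng = np.random.default_rng(1)
A = rng.standard_normal((300, 20))
x_true = np.zeros(20)
x_true[:3] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(300)
x_hat = stochastic_prox_grad(A, b, lam=0.1)
print("nonzero entries in x_hat:", np.count_nonzero(x_hat))

Because the thresholding happens inside the loop, sparsity holds at every iterate rather than only in the limit; by contrast, plain stochastic subgradient steps on the l1 term almost never produce exact zeros.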

Keywords: Stochastic gradient, first-order methods, regularized regression

Category 1: Convex and Nonsmooth Optimization (Convex Optimization)

Citation: Working Paper, Tepper School of Business, Carnegie Mellon University, March 2011.

Download: [PDF]

Entry Submitted: 04/23/2011
Entry Accepted: 04/23/2011
Entry Last Modified: 04/23/2011
