A Sparsity Preserving Stochastic Gradient Method for Composite Optimization
Abstract: We propose new stochastic gradient algorithms for solving convex composite optimization problems. In each iteration, our algorithms call a stochastic oracle for the gradient of the smooth component of the objective function. Our algorithms are based on a stochastic version of the estimate sequence technique introduced by Nesterov (Introductory Lectures on Convex Optimization: A Basic Course, Kluwer, 2003). We establish convergence results in expectation and variance, as well as large-deviation properties, for the objective values of the iterates generated by our algorithms. When applied to sparse regression problems, our algorithms have the advantage of readily enforcing sparsity at all iterations. We present numerical experiments on simulated data sets.
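The sparsity-enforcing mechanism the abstract alludes to is, in generic form, a proximal (soft-thresholding) step applied after each stochastic gradient step: because the prox of the l1 norm zeroes out small coordinates, every iterate is sparse, not just the limit point. The sketch below is a plain stochastic proximal gradient method for l1-regularized least squares; it illustrates that general idea only and is not the paper's estimate-sequence algorithm. The function names, step size, and problem setup are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrinks each coordinate toward zero
    # and sets coordinates with |v_j| <= t exactly to zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_prox_grad(A, b, lam, step=0.01, n_iter=500, seed=0):
    """Generic sketch (not the paper's method): minimize
    (1/2)(a_i^T x - b_i)^2 in expectation over rows i, plus lam * ||x||_1.

    Each iteration samples one row of A as a stochastic oracle for the
    gradient of the smooth term, then applies the l1 proximal step, so
    every iterate -- not only the final one -- is sparse.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        i = rng.integers(m)                           # sample one data point
        g = (A[i] @ x - b[i]) * A[i]                  # stochastic gradient of smooth part
        x = soft_threshold(x - step * g, step * lam)  # prox step enforces sparsity
    return x
```

With a sufficiently large regularization weight `lam`, the soft-thresholding step dominates every stochastic gradient step and the iterates stay at the all-zeros vector, which makes the sparsity-preservation property easy to check directly.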
Keywords: Stochastic gradient, first-order methods, regularized regression
Category 1: Convex and Nonsmooth Optimization (Convex Optimization)
Citation: Working Paper, Tepper School of Business, Carnegie Mellon University, March 2011.
Entry Submitted: 04/23/2011