Optimization Online


Stochastic optimization and sparse statistical recovery: An optimal algorithm for high dimensions

Alekh Agarwal(alekh***at***cs.berkeley.edu)
Sahand Negahban(sahandn***at***mit.edu)
Martin Wainwright(wainwrig***at***stat.berkeley.edu)

Abstract: We develop and analyze stochastic optimization algorithms for problems in which the expected loss is strongly convex, and the optimum is (approximately) sparse. Previous approaches are able to exploit only one of these two structures, yielding an $\mathcal{O}(d/T)$ convergence rate for strongly convex objectives in $d$ dimensions, and an $\mathcal{O}(\sqrt{(s \log d)/T})$ convergence rate when the optimum is $s$-sparse. Our algorithm is based on successively solving a series of $\ell_1$-regularized optimization problems using Nesterov's dual averaging algorithm. We establish that the error of our solution after $T$ iterations is at most $\mathcal{O}((s \log d)/T)$, with natural extensions to approximate sparsity. Our results apply to locally Lipschitz losses including the logistic, exponential, hinge and least-squares losses. By recourse to statistical minimax results, we show that our convergence rates are optimal up to multiplicative constant factors. The effectiveness of our approach is also confirmed in numerical simulations, in which we compare to several baselines on a least-squares regression problem.
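To make the multi-stage scheme concrete, below is a minimal Python sketch of one way such an algorithm could look: each epoch runs $\ell_1$-regularized dual averaging with a quadratic prox term centered at the previous epoch's averaged iterate, and successive epochs double in length while the regularization weight is halved. The oracle interface grad_oracle, the function names, and the geometric schedule (epochs, T0, lam0, gamma) are illustrative assumptions, not the paper's exact procedure, which additionally restricts each epoch to a shrinking $\ell_1$-ball and ties the schedule to the strong-convexity and sparsity parameters.

import numpy as np

def soft_threshold(v, tau):
    # Componentwise soft-thresholding: the prox operator of tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def l1_rda_epoch(grad_oracle, center, T, lam, gamma):
    # One epoch of l1-regularized dual averaging.  grad_oracle(w) returns a
    # stochastic subgradient of the expected loss at w.  Returns the epoch's
    # averaged iterate.
    gbar = np.zeros_like(center)   # running average of observed gradients
    w = center.copy()
    w_sum = np.zeros_like(center)
    for t in range(1, T + 1):
        g = grad_oracle(w)
        gbar += (g - gbar) / t           # gbar = (1/t) * sum of gradients
        beta = gamma * np.sqrt(t)        # standard RDA step-size schedule
        # Closed-form minimizer of
        #   <gbar, w> + lam * ||w||_1 + (beta / t) * 0.5 * ||w - center||^2
        w = soft_threshold(center - (t / beta) * gbar, (t / beta) * lam)
        w_sum += w
    return w_sum / T

def multistage_l1_rda(grad_oracle, dim, epochs=8, T0=100, lam0=0.5, gamma=1.0):
    # Successive epochs of l1-RDA, re-centered at the previous epoch's
    # average; the epoch length doubles and the regularization weight halves
    # each time.  This geometric schedule is a placeholder.
    center = np.zeros(dim)
    T, lam = T0, lam0
    for _ in range(epochs):
        center = l1_rda_epoch(grad_oracle, center, T, lam, gamma)
        T, lam = 2 * T, lam / 2.0
    return center

For instance, on the least-squares regression problem used in the simulations, grad_oracle(w) would draw a fresh pair (x, y) and return x * (np.dot(x, w) - y).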

Keywords: stochastic optimization, sparsity, strong convexity

Category 1: Stochastic Programming

Category 2: Convex and Nonsmooth Optimization (Convex Optimization)


Download: [PDF]

Entry Submitted: 07/18/2012
Entry Accepted: 07/18/2012
Entry Last Modified: 07/18/2012

