Tail bounds for stochastic approximation

Stochastic-approximation gradient methods are attractive for large-scale convex optimization because their iterations are inexpensive. They are especially popular in data-fitting and machine-learning applications where the data arrive in a continuous stream or where a large sum of functions must be minimized. It is known that by appropriately decreasing the variance of the gradient error at each iteration, the expected rate of convergence matches that of the underlying deterministic gradient method. Here we give conditions under which this happens with overwhelming probability.
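To illustrate the variance-reduction idea described above, the following Python sketch (not the report's algorithm) runs inexact gradient descent on a strongly convex quadratic, where the gradient is perturbed by zero-mean noise whose standard deviation is shrunk geometrically across iterations. The test problem, the step size, and the noise schedule `sigma0 * 0.9**k` are illustrative assumptions; with the noise decreasing quickly enough, the iterates track the linear rate of exact gradient descent.

```python
import numpy as np

# Minimal sketch: inexact gradient descent on f(x) = 0.5 x^T A x - b^T x,
# with gradient noise whose variance decreases geometrically.  The exact
# gradient method contracts the distance to the minimizer by (1 - mu/L)
# per iteration; the noisy method is compared against that rate.

rng = np.random.default_rng(0)
d = 20
A = np.diag(np.linspace(1.0, 10.0, d))   # eigenvalues in [1, 10]
b = rng.standard_normal(d)
x_star = np.linalg.solve(A, b)           # exact minimizer

L, mu = 10.0, 1.0                        # Lipschitz and strong-convexity constants
step = 1.0 / L
rho = 1.0 - mu / L                       # contraction factor of exact gradient descent

x = np.zeros(d)
r0 = np.linalg.norm(x - x_star)          # initial distance to the minimizer
sigma0 = 1.0                             # initial noise level (assumed)

for k in range(200):
    grad = A @ x - b
    sigma_k = sigma0 * (0.9 ** k)        # noise standard deviation shrinks geometrically
    noisy_grad = grad + sigma_k * rng.standard_normal(d)
    x = x - step * noisy_grad
    if k % 50 == 0:
        err = np.linalg.norm(x - x_star)
        print(f"iter {k:3d}: ||x - x*|| = {err:.3e}  "
              f"(exact-GD bound ~ {rho**k * r0:.3e})")
```

The printed distances stay close to the deterministic bound, which is the behavior the report analyzes: in expectation by earlier work, and with overwhelming probability under the conditions given here.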

Citation

Department of Computer Science, University of British Columbia, April 2013
