Tail bounds for stochastic approximation
Michael P. Friedlander (mpf

Abstract: Stochastic-approximation gradient methods are attractive for large-scale convex optimization because they offer inexpensive iterations. They are especially popular in data-fitting and machine-learning applications where the data arrive in a continuous stream, or where it is necessary to minimize large sums of functions. It is known that by appropriately decreasing the variance of the error at each iteration, the expected rate of convergence matches that of the underlying deterministic gradient method. Here we give conditions under which this happens with overwhelming probability.

Keywords: stochastic approximation, sample-average approximation, incremental gradient, convex optimization

Category 1: Convex and Nonsmooth Optimization (Convex Optimization)

Category 2: Stochastic Programming

Citation: Department of Computer Science, University of British Columbia, April 2013

Download: [PDF]

Entry Submitted: 04/19/2013
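The mechanism the abstract refers to, shrinking the variance of the gradient error at each iteration so the stochastic method inherits the deterministic rate, can be illustrated by drawing geometrically larger samples at every step. The sketch below is not code from the paper; the least-squares problem, step size, and growth factor are all illustrative assumptions.

```python
import numpy as np

# A minimal sketch (not from the paper) of a stochastic gradient method whose
# gradient-error variance is decreased at every iteration by drawing
# geometrically larger samples. Problem data, step size, and growth factor
# are assumptions made only for illustration.

rng = np.random.default_rng(0)

# Synthetic least-squares problem: f(x) = (1/2n) * ||A x - b||^2.
n, d = 10_000, 20
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(n)

def sampled_gradient(x, batch):
    """Unbiased estimate of the full gradient from a random subset of rows."""
    residual = A[batch] @ x - b[batch]
    return A[batch].T @ residual / batch.size

x = np.zeros(d)
step = 0.5           # fixed step size (assumption)
sample_size = 10.0   # initial sample size (assumption)
growth = 1.2         # geometric growth factor: error variance shrinks each iteration

for k in range(60):
    batch = rng.choice(n, size=min(int(sample_size), n), replace=False)
    x -= step * sampled_gradient(x, batch)
    sample_size *= growth  # larger sample next iteration => smaller error variance

print("distance to x_true:", np.linalg.norm(x - x_true))
```

With the sample size growing geometrically, the per-iteration gradient error decreases geometrically, which is the regime in which the expected rate of convergence matches that of the deterministic gradient method; the paper gives conditions under which this also holds with overwhelming probability.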