Robust stochastic optimization with the proximal point method

Standard results in stochastic convex optimization bound the number of samples an algorithm needs to generate a point with small function value in expectation. In this work, we show that a wide class of such algorithms for strongly convex problems can be augmented with sub-exponential confidence bounds, at an overhead cost that is only polylogarithmic in the condition number and the confidence level. We discuss consequences for both streaming and offline algorithms.
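One classical ingredient behind results of this flavor is robust distance estimation: run a base method several times independently, so that each run lands near the minimizer with probability better than 1/2 (by Markov's inequality, given the in-expectation guarantee), then return the output whose median distance to the other outputs is smallest. A standard argument then shows the failure probability decays exponentially in the number of runs. The sketch below illustrates only this generic boosting step, not the paper's specific procedure; `base_solver` is a hypothetical stand-in for one independent run of a stochastic method.

```python
import numpy as np

def base_solver(rng):
    """Hypothetical stand-in for one run of a stochastic method whose
    output x satisfies E[||x - x*||^2] <= eps on a strongly convex problem.
    Here it simply returns a noisy estimate of a fixed minimizer."""
    x_star = np.array([1.0, -2.0])
    return x_star + rng.normal(scale=0.5, size=2)

def boost_confidence(solver, num_trials, seed=0):
    """Robust distance estimation: run `solver` independently `num_trials`
    times and return the candidate whose median distance to the other
    candidates is smallest. If each run is accurate with probability > 1/2,
    the selected point is accurate with probability 1 - exp(-c * num_trials),
    so num_trials = O(log(1/delta)) suffices for confidence level delta."""
    rng = np.random.default_rng(seed)
    candidates = [solver(rng) for _ in range(num_trials)]

    def median_dist(x):
        return np.median([np.linalg.norm(x - y) for y in candidates])

    return min(candidates, key=median_dist)

# Usage: boost a cheap in-expectation guarantee to high confidence.
x_hat = boost_confidence(base_solver, num_trials=21)
print(x_hat)
```

The overhead of this scheme is multiplicative in the number of trials, which is logarithmic in 1/delta; this matches the abstract's claim that the cost of the confidence boost scales only polylogarithmically in the confidence level.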
