Optimization Online
Stochastic model-based minimization under high-order growth

Damek Davis (dsd95@cornell.edu)
Dmitriy Drusvyatskiy (ddrusv@uw.edu)
Kellie J. MacPhee (kmacphee@uw.edu)

Abstract: Given a nonsmooth, nonconvex minimization problem, we consider algorithms that iteratively sample and minimize stochastic convex models of the objective function. Assuming that the one-sided approximation quality and the variation of the models are controlled by a Bregman divergence, we show that the scheme drives a natural stationarity measure to zero at the rate $O(k^{-1/4})$. Under additional convexity and relative strong convexity assumptions, the function values converge to the minimum at the rates $O(k^{-1/2})$ and $\widetilde{O}(k^{-1})$, respectively. We discuss consequences for stochastic proximal point, mirror descent, regularized Gauss-Newton, and saddle point algorithms.
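To make the scheme concrete, the following minimal Python sketch (not taken from the paper; the oracle interface, the Euclidean choice of Bregman divergence, and the O(1/sqrt(t)) step schedule are illustrative assumptions) shows its simplest instance: when the sampled convex model is the linear model f(x_t) + <g_t, x - x_t> and the divergence is Euclidean, each model-minimization step reduces to a stochastic subgradient update.

    import numpy as np

    def stochastic_model_step_sketch(subgrad_oracle, x0, num_steps, step_size=1.0):
        # Illustrative sketch only: stochastic mirror descent with the
        # Euclidean Bregman divergence, the simplest member of the family
        # of model-based schemes described in the abstract above.
        x = np.asarray(x0, dtype=float)
        for t in range(num_steps):
            g = subgrad_oracle(x)              # sampled subgradient defining the linear model at x
            eta = step_size / np.sqrt(t + 1)   # assumed O(1/sqrt(t)) step schedule
            # Minimizing the linear model plus (1/(2*eta))*||y - x||^2 over y
            # yields the familiar stochastic subgradient step:
            x = x - eta * g
        return x

    # Example usage on a toy stochastic objective E|a^T x| (assumed data):
    # rng = np.random.default_rng(0)
    # oracle = lambda x: (lambda a: np.sign(a @ x) * a)(rng.standard_normal(5))
    # x_hat = stochastic_model_step_sketch(oracle, np.ones(5), num_steps=1000)

Other members of the family (stochastic proximal point, regularized Gauss-Newton) replace the linear model above with richer convex models, and the quadratic penalty with a general Bregman divergence.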

Keywords: stochastic model, Bregman proximal point method, Gauss-Newton, mirror descent, saddle-point

Category 1: Convex and Nonsmooth Optimization

Citation:

Download: [PDF]

Entry Submitted: 06/30/2018
Entry Accepted: 07/01/2018
Entry Last Modified: 06/30/2018


