Optimization Online
Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria

D. Drusvyatskiy (ddrusv@uw.edu)
A.D. Ioffe (ioffe@tx.technion.ac.il)
A.S. Lewis (adrian.lewis@cornell.edu)

Abstract: We consider optimization algorithms that successively minimize simple Taylor-like models of the objective function. Methods of Gauss-Newton type for minimizing the composition of a convex function and a smooth map are common examples. Our main result is an explicit relationship between the step-size of any such algorithm and the slope of the function at a nearby point. Consequently, we (1) show that the step-sizes can be reliably used to terminate the algorithm, (2) prove that, as long as the step-sizes tend to zero, every limit point of the iterates is stationary, and (3) show that conditions akin to classical quadratic growth imply that the step-sizes linearly bound the distance of the iterates to the solution set. The latter so-called error bound property is typically used to establish linear (or faster) convergence guarantees. Analogous results hold when the step-size is replaced by the square root of the decrease in the model's value. We conclude the paper with extensions to the setting in which the models are minimized only inexactly.
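To make the scheme concrete, below is a minimal Python sketch, written for illustration rather than taken from the paper, of one common instance: a proximally regularized Gauss-Newton iteration for nonlinear least squares, i.e. the composition of the convex function h(z) = 0.5*||z||^2 with a smooth map c. The function name prox_gauss_newton, the proximal parameter t, and the toy test problem are all assumptions of this sketch; the step length serves as the termination criterion, in the spirit of result (1) of the abstract.

    # Sketch: successive minimization of a simple Taylor-like model for
    # f(x) = 0.5*||c(x)||^2, with c smooth. At each iterate x we minimize
    #     m(y) = 0.5*||c(x) + J(x)(y - x)||^2 + (1/(2*t))*||y - x||^2,
    # whose minimizer solves (J^T J + I/t) d = -J^T c(x) with d = y - x.

    import numpy as np

    def prox_gauss_newton(c, jac, x0, t=1.0, tol=1e-8, max_iter=100):
        """Minimize 0.5*||c(x)||^2 by successive quadratic model minimization.

        c    : callable returning the residual vector c(x)
        jac  : callable returning the Jacobian J(x) of c
        x0   : initial point
        t    : proximal parameter controlling the model's quadratic penalty
        tol  : stop when the step length ||d|| falls below tol
        """
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            r, J = c(x), jac(x)
            # Exact minimizer of the Taylor-like model at the current iterate.
            d = np.linalg.solve(J.T @ J + np.eye(x.size) / t, -J.T @ r)
            x = x + d
            # The step length is used as the termination criterion.
            if np.linalg.norm(d) < tol:
                break
        return x

    # Usage on a small, hypothetical nonlinear system:
    c = lambda x: np.array([x[0]**2 + x[1] - 1.0, x[0] - x[1]**2])
    jac = lambda x: np.array([[2*x[0], 1.0], [1.0, -2*x[1]]])
    print(prox_gauss_newton(c, jac, np.array([2.0, 2.0])))

On this toy system the step length drops below the tolerance within a few iterations; in the paper's framework, a small step certifies, via the main result, that the slope of the objective is small at a nearby point, which is what justifies terminating on the step-size.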

Keywords: Taylor-like model, error bound, slope, subregularity, Ekeland's principle

Category 1: Convex and Nonsmooth Optimization (Nonsmooth Optimization)

Citation: 9/30/2016

Download: [PDF]

Entry Submitted: 09/30/2016
Entry Accepted: 09/30/2016
Entry Last Modified: 10/11/2016
