  


Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria
Author: D. Drusvyatskiy (ddrusv@uw.edu)

Abstract: We consider optimization algorithms that successively minimize simple Taylor-like models of the objective function. Methods of Gauss-Newton type for minimizing the composition of a convex function and a smooth map are common examples. Our main result is an explicit relationship between the step-size of any such algorithm and the slope of the function at a nearby point. Consequently, we (1) show that the step-sizes can be reliably used to terminate the algorithm, (2) prove that as long as the step-sizes tend to zero, every limit point of the iterates is stationary, and (3) show that conditions, akin to classical quadratic growth, imply that the step-sizes linearly bound the distance of the iterates to the solution set. The latter so-called error bound property is typically used to establish linear (or faster) convergence guarantees. Analogous results hold when the step-size is replaced by the square root of the decrease in the model's value. We complete the paper with extensions to the case when the models are minimized only inexactly.

Keywords: Taylor-like model, error bound, slope, subregularity, Ekeland's principle

Category 1: Convex and Nonsmooth Optimization (Nonsmooth Optimization)

Citation: 9/30/2016

Download: [PDF]

Entry Submitted: 09/30/2016
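The scheme described in the abstract can be sketched as follows; the notation here is illustrative and is not taken verbatim from the paper.

```latex
% Illustrative sketch (assumed notation): model-based iteration and
% step-size termination test for minimizing f.
\[
  x_{k+1} \in \operatorname*{argmin}_{y}\; m_{x_k}(y),
  \qquad s_k := \|x_{k+1} - x_k\| \quad \text{(the step-size)},
\]
where $m_{x_k}$ is a simple Taylor-like model of $f$ around $x_k$.
For the Gauss--Newton-type setting $f = h \circ c$, with $h$ convex and
$c$ smooth, a standard prox-linear model (with parameter $t > 0$) is
\[
  m_{x_k}(y) \;=\; h\bigl(c(x_k) + \nabla c(x_k)(y - x_k)\bigr)
            \;+\; \tfrac{1}{2t}\,\|y - x_k\|^2 .
\]
```

The main result relates $s_k$ to the slope of $f$ at a point near $x_{k+1}$, which is why the test $s_k \le \varepsilon$ serves as a reliable termination criterion.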