Optimization Online





 

A unified analysis of descent sequences in weakly convex optimization, including convergence rates for bundle methods

Felipe Atenas (f262900@dac.unicamp.br)
Claudia Sagastizábal (sagastiz@unicamp.br)
Paulo J. S. Silva (pjssilva@unicamp.br)
Mikhail Solodov (solodov@impa.br)

Abstract: We present a framework for analyzing convergence and local rates of convergence of a class of descent algorithms, assuming the objective function is weakly convex. The framework is general in the sense that it covers explicit iterations (based on the gradient or a subgradient at the current iterate), implicit iterations (using a subgradient at the next iterate, as in proximal schemes), as well as iterations in which the associated subgradient is specially constructed and corresponds to neither the current nor the next point (this is the case of descent steps in bundle methods). Under a subdifferential-based error bound on the distance to critical points, linear rates of convergence are established. Our analysis covers, among other techniques, prox-descent for decomposable functions, the proximal-gradient method for a sum of functions, redistributed bundle methods, and a class of algorithms that can be cast in the feasible descent framework for constrained optimization.
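
The proximal-gradient method for a sum of functions, one of the schemes covered by the analysis, combines an explicit gradient step on the smooth term with an implicit (proximal) step on the nonsmooth term. The sketch below is only a minimal illustration of that scheme, not the paper's algorithm: the choice of smooth term (least squares), nonsmooth term (an l1 penalty, which is in particular convex rather than merely weakly convex), and fixed step size are assumptions made for demonstration.

```python
import numpy as np

# Illustrative proximal-gradient iteration for minimizing f(x) + g(x).
# All problem data below are assumptions chosen for demonstration only.

def prox_l1(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam=0.1, step=None, iters=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1."""
    if step is None:
        # 1/L, with L the Lipschitz constant of the gradient of the smooth part
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                  # explicit step: gradient at the current iterate
        x = prox_l1(x - step * grad, step * lam)  # implicit step: subgradient of g at the next iterate
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    b = A @ (rng.standard_normal(20) * (rng.random(20) < 0.3)) + 0.01 * rng.standard_normal(50)
    x = proximal_gradient(A, b)
    print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2 + 0.1 * np.linalg.norm(x, 1))
```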

Keywords: weak convexity, descent methods, bundle methods, model-based methods, proximal descent, proximal gradient method, error bound, linear convergence

Category 1: Nonlinear Optimization

Category 2: Complementarity and Variational Inequalities

Citation:

Download: [PDF]

Entry Submitted: 12/15/2021
Entry Accepted: 12/15/2021
Entry Last Modified: 12/15/2021


