Optimization Online

Universal regularization methods - varying the power, the smoothness and the accuracy

Coralia Cartis (coralia.cartis@maths.ox.ac.uk)
Nick I. M. Gould (nick.gould@stfc.ac.uk)
Philippe L. Toint (philippe.toint@unamur.be)

Abstract: Adaptive cubic regularization methods have emerged as a credible alternative to linesearch and trust-region methods for smooth nonconvex optimization, with optimal complexity amongst second-order methods. Here we consider a general new class of adaptive regularization methods that use first- or higher-order local Taylor models of the objective, regularized by any power of the step size, and applied to convexly constrained optimization problems. We investigate the worst-case evaluation complexity (global rate of convergence) of these algorithms when the level of sufficient smoothness of the objective may be unknown or may even be absent. We find that the complexity of the methods accurately reflects the degree of smoothness of the objective, and that the bounds improve as the accuracy of the models increases. The bounds vary continuously and robustly with respect to the regularization power, the accuracy of the model and the degree of smoothness of the objective.
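To fix ideas, a minimal sketch of the kind of regularized model involved, in notation assumed here rather than taken from this entry (the symbols p, r and \sigma_k are illustrative): at an iterate x_k, such methods build

\[
  m_k(s) \;=\; T_{f,p}(x_k,s) \;+\; \frac{\sigma_k}{r}\,\|s\|^{r},
  \qquad
  T_{f,p}(x_k,s) \;=\; f(x_k) + \sum_{i=1}^{p} \frac{1}{i!}\,\nabla^{i} f(x_k)[s]^{i},
\]

where p \ge 1 is the order of the Taylor model, r > 1 is the regularization power and \sigma_k > 0 is an adaptively updated regularization parameter. A trial step s_k is then computed by approximately minimizing m_k over steps that keep x_k + s in the convex feasible set, and \sigma_k is increased or decreased depending on how well m_k predicts the actual decrease in the objective. The adaptive cubic regularization setting mentioned above corresponds to p = 2 and r = 3.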

Keywords: evaluation complexity, worst-case analysis, regularization methods

Category 1: Nonlinear Optimization

Citation: Numerical Analysis Technical Report, Mathematical Institute, University of Oxford, 2016.

Download: [PDF]

Entry Submitted: 12/02/2016
Entry Accepted: 12/02/2016
Entry Last Modified: 12/02/2016
