Optimization Online

Optimal Newton-type methods for nonconvex smooth optimization problems

Coralia Cartis (coralia.cartis***at***ed.ac.uk)
Nicholas I. M. Gould (nick.gould***at***stfc.ac.uk)
Philippe L. Toint (philippe.toint***at***fundp.ac.be)

Abstract: We consider a general class of second-order iterations for unconstrained optimization that includes regularization and trust-region variants of Newton's method. For each method in this class, we exhibit a smooth, bounded-below objective function whose gradient is globally Lipschitz continuous on an open convex set containing all iterates encountered, and whose Hessian is $\alpha$-H\"older continuous (for given $\alpha\in[0,1]$) on the path of the iterates, for which the method in question takes at least $\epsilon^{-(2+\alpha)/(1+\alpha)}$ function evaluations to generate a first iterate whose gradient is smaller than $\epsilon$ in norm. This provides a lower bound on the evaluation complexity of second-order methods in our class when applied to smooth problems satisfying our assumptions. Furthermore, for $\alpha=1$, this lower bound is of the same order in $\epsilon$ as the upper bound on the evaluation complexity of cubic regularization, thus implying that cubic regularization has optimal worst-case evaluation complexity within our class of second-order methods.
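For concreteness, note that for $\alpha=1$ the lower bound above reads $\epsilon^{-(2+1)/(1+1)}=\epsilon^{-3/2}$, matching the $O(\epsilon^{-3/2})$ upper bound known for cubic regularization. Below is a minimal Python sketch of a cubic-regularization (ARC-type) iteration of the kind the abstract's class covers: each step approximately minimizes the cubic model $m(s)=g^Ts+\frac{1}{2}s^THs+\frac{\sigma}{3}\|s\|^3$ and stops at the first iterate with $\|g\|\le\epsilon$. The test objective (Rosenbrock), the simplified accept/reject rule for $\sigma$, and the tolerances are illustrative assumptions, not taken from the paper.

import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    # Smooth, bounded-below test objective (an illustrative choice).
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def grad(x):
    return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                     200 * (x[1] - x[0]**2)])

def hess(x):
    return np.array([[2 - 400 * (x[1] - 3 * x[0]**2), -400 * x[0]],
                     [-400 * x[0], 200.0]])

def cubic_reg_step(g, H, sigma):
    # Approximately minimize the cubic model
    #   m(s) = g^T s + 0.5 s^T H s + (sigma/3) ||s||^3
    # using a generic inner solver (a pragmatic stand-in for the
    # subproblem solvers analyzed in the literature).
    m = lambda s: g @ s + 0.5 * s @ H @ s + (sigma / 3) * np.linalg.norm(s)**3
    s0 = -g / max(np.linalg.norm(H), 1.0)  # scaled steepest-descent start
    return minimize(m, s0).x

x, sigma, eps = np.array([-1.2, 1.0]), 1.0, 1e-6
for k in range(500):
    g = grad(x)
    if np.linalg.norm(g) <= eps:      # first-order stopping test ||g|| <= eps
        break
    s = cubic_reg_step(g, hess(x), sigma)
    if rosenbrock(x + s) < rosenbrock(x):
        x, sigma = x + s, max(0.5 * sigma, 1e-3)  # accept; cautiously relax
    else:
        sigma *= 2.0                  # reject; increase regularization
print(k, x, np.linalg.norm(grad(x)))

The fixed accept/reject rule above is a deliberate simplification of the adaptive $\sigma$-updates used in full ARC-style methods; it keeps the sketch short while preserving the defining feature of the class, namely that the step comes from a cubically regularized second-order model.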

Keywords: nonlinear optimization, global complexity bounds, optimal algorithms

Category 1: Nonlinear Optimization

Citation: ERGO Technical Report 11-009, School of Mathematics, University of Edinburgh, 2011.

Download: [PDF]

Entry Submitted: 06/22/2011
Entry Accepted: 06/22/2011
Entry Last Modified: 06/22/2011
