Optimization Online


On the complexity of steepest descent, Newton's and regularized Newton's methods for nonconvex unconstrained optimization

Coralia Cartis (coralia.cartis***at***ed.ac.uk)
Nick Gould (nick.gould***at***stfc.ac.uk)
Philippe Toint (philippe.toint***at***fundp.ac.be)

Abstract: It is shown that the steepest descent and Newton's methods for unconstrained nonconvex optimization under standard assumptions may both require a number of iterations and function evaluations arbitrarily close to O(epsilon^{-2}) to drive the norm of the gradient below epsilon. This shows that the upper bound of O(epsilon^{-2}) evaluations known for steepest descent is tight, and that Newton's method may be as slow as steepest descent in the worst case. The improved evaluation complexity bound of O(epsilon^{-3/2}) evaluations known for cubically-regularised Newton methods is also shown to be tight.
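For readers unfamiliar with the setting, the sketch below illustrates the method and the termination rule the complexity bounds refer to: fixed-step steepest descent run until the gradient norm drops below epsilon, counting the iterations (equivalently, gradient evaluations) used. This is only an illustration on a benign one-dimensional nonconvex objective chosen here for simplicity; such problems converge far faster than the worst-case O(epsilon^{-2}) rate, which is attained only on the specially constructed examples analysed in the paper.

```python
def steepest_descent(grad, x0, eps, alpha, max_iter=100_000):
    """Fixed-step steepest descent on a one-dimensional function.

    Returns the number of iterations needed to drive |grad(x)| below eps,
    the stopping criterion used in evaluation-complexity analyses.
    """
    x = x0
    for k in range(max_iter):
        g = grad(x)
        if abs(g) <= eps:
            return k
        x -= alpha * g  # step along the negative gradient
    return max_iter

# Illustrative nonconvex objective: f(x) = x^4/4 - x^2/2, so f'(x) = x^3 - x.
grad = lambda x: x**3 - x
for eps in (1e-2, 1e-4, 1e-6):
    print(f"eps = {eps:g}: {steepest_descent(grad, 2.0, eps, alpha=0.1)} iterations")
```

On this well-behaved problem the iteration count grows only logarithmically in 1/epsilon; the point of the paper is that no bound better than roughly epsilon^{-2} holds uniformly over all objectives satisfying the standard smoothness assumptions.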

Keywords: Complexity, nonconvex unconstrained optimization, steepest descent, Newton's method

Category 1: Nonlinear Optimization (Unconstrained Optimization )

Citation: Report 09/14, Department of Mathematics, FUNDP-University of Namur, Namur, Belgium

Download: [PDF]

Entry Submitted: 10/16/2009
Entry Accepted: 10/16/2009
Entry Last Modified: 10/16/2009
