Optimization Online

Behavior of accelerated gradient methods near critical points of nonconvex functions

Michael O'Neill (moneill@cs.wisc.edu)
Stephen Wright (swright@cs.wisc.edu)

Abstract: We examine the behavior of accelerated gradient methods in smooth nonconvex unconstrained optimization, focusing in particular on their behavior near strict saddle points. Accelerated methods are iterative methods that typically step along a direction that is a linear combination of the previous step and the gradient of the function evaluated at a point at or near the current iterate. (The previous step encodes gradient information from earlier stages in the iterative process.) We show by means of the stable manifold theorem that the heavy-ball method is unlikely to converge to strict saddle points, which are points at which the gradient of the objective is zero but the Hessian has at least one negative eigenvalue. We then examine the behavior of the heavy-ball method and Nesterov's accelerated gradient method [Nes83] in the vicinity of a strict saddle point of a nonconvex quadratic function, showing that both methods can diverge from this point more rapidly than the steepest descent method.

Keywords: Accelerated Gradient Methods, Nonconvex Optimization

Category 1: Nonlinear Optimization (Unconstrained Optimization)

Citation: Technical Report, University of Wisconsin-Madison, June 2017. Revised November 2017.

Download: [PDF]

Entry Submitted: 06/29/2017
Entry Accepted: 06/29/2017
Entry Last Modified: 11/27/2017
