Optimization Online

Convergence Rate of Gradient Based Adaptive Restart for Accelerated Gradient Schemes

Can Kizilkale (kizilkalecan***at***gmail.com)
Shivkumar Chandrasekaran (shiv***at***ece.ucsb.edu)
Ming Gu (mgu***at***berkeley.edu)

Abstract: The accelerated gradient algorithm is known to exhibit non-monotonic, periodic convergence behavior in the high-momentum regime. If key function parameters such as the condition number are known, the momentum can be tuned to achieve linear convergence. Unfortunately, these parameters are usually not accessible, so heuristics are used instead to decide when to restart. One of the most intuitive and well-known heuristics is to monitor the inner product of the momentum vector and the gradient and to restart when this inner product is positive, since a positive value indicates that the momentum is carrying the iterate in a direction of increasing objective value. In this paper we prove that the convergence rate of this adaptive restarting heuristic is linear under strong convexity.
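
As a concrete illustration of the heuristic, the following is a minimal Python sketch, not code from the paper: Nesterov's accelerated gradient method in which the momentum is discarded whenever the inner product of the current gradient and the momentum vector is positive. The function name agd_gradient_restart, its parameters, and the k/(k+3) momentum schedule are illustrative assumptions; the sketch further assumes an L-smooth objective with step size 1/L.

    import numpy as np

    def agd_gradient_restart(grad, x0, step, max_iter=1000, tol=1e-10):
        # Nesterov accelerated gradient with gradient-based adaptive restart.
        # grad: callable returning the gradient of the objective (an interface
        #       assumed here for illustration)
        # step: step size, e.g. 1/L for an L-smooth objective
        x_prev = np.asarray(x0, dtype=float).copy()
        y = x_prev.copy()
        k = 0  # momentum counter, reset to zero on restart
        for _ in range(max_iter):
            g = grad(y)
            if np.linalg.norm(g) < tol:
                break
            x = y - step * g        # gradient step from the extrapolated point
            momentum = x - x_prev   # current momentum direction
            # Restart test from the abstract: a positive inner product means
            # the momentum has a component along the gradient, i.e. uphill.
            if g @ momentum > 0:
                k = 0               # reset the momentum counter
                y = x               # discard the accumulated momentum
            else:
                k += 1
                y = x + (k / (k + 3.0)) * momentum  # Nesterov extrapolation
            x_prev = x
        return x_prev

For example, on the strongly convex quadratic f(x) = 0.5 x^T diag(1, 100) x with step size 1/100, the restarted iteration converges linearly to the minimizer at the origin, whereas the plain accelerated scheme displays the periodic, non-monotonic behavior the abstract mentions:

    A = np.diag([1.0, 100.0])  # illustrative problem, condition number 100
    x_min = agd_gradient_restart(lambda x: A @ x, np.array([1.0, 1.0]), step=0.01)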

Keywords: Accelerated gradient, restart, convex optimization, strong convexity

Category 1: Convex and Nonsmooth Optimization

Download: [PDF]

Entry Submitted: 10/11/2017
Entry Accepted: 10/11/2017
Entry Last Modified: 10/11/2017
