Optimization Online

Adaptive FISTA

Peter Ochs (ochs***at***math.uni-sb.de)
Thomas Pock (pock***at***icg.tugraz.at)

Abstract: In this paper we propose an adaptively extrapolated proximal gradient method based on the accelerated proximal gradient method (also known as FISTA); in contrast to FISTA, however, we locally optimize the extrapolation parameter by carrying out an exact (or inexact) line search. It turns out that in some situations the proposed algorithm is equivalent to a class of SR1 (identity minus rank 1) proximal quasi-Newton methods. Convergence is proved in a general non-convex setting, and hence, as a byproduct, we also obtain new convergence guarantees for proximal quasi-Newton methods. In the case of convex problems, we can devise hybrid algorithms that enjoy the classical O(1/k^2) convergence rate of accelerated proximal gradient methods. The efficiency of the new method is demonstrated on several classical optimization problems.
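To make the mechanism concrete, the following is a minimal Python sketch, not the authors' implementation: a forward-backward (proximal gradient) step taken at an extrapolated point, with the extrapolation parameter chosen by an inexact line search over a small candidate grid, applied to a LASSO problem min_x 0.5*||Ax - b||^2 + lam*||x||_1. The candidate grid, the step size rule, and the LASSO instance are assumptions made here purely for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def adaptive_fista_sketch(A, b, lam, max_iter=200):
    """Illustrative sketch: proximal gradient with an adaptively chosen
    extrapolation parameter, applied to the LASSO problem
        min_x 0.5*||A x - b||^2 + lam*||x||_1.
    The extrapolation parameter beta is picked by an inexact line search
    over a small candidate grid (a stand-in for the exact/inexact line
    search described in the abstract).
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    x_prev = x.copy()
    F = lambda z: 0.5 * np.sum((A @ z - b) ** 2) + lam * np.sum(np.abs(z))
    for _ in range(max_iter):
        d = x - x_prev                     # momentum direction
        best_x, best_val = None, np.inf
        # inexact line search: try a few extrapolation parameters
        for beta in (0.0, 0.25, 0.5, 0.75, 1.0):
            y = x + beta * d               # extrapolated point
            grad = A.T @ (A @ y - b)       # gradient of the smooth part at y
            x_new = soft_threshold(y - grad / L, lam / L)
            val = F(x_new)
            if val < best_val:
                best_x, best_val = x_new, val
        x_prev, x = x, best_x
    return x

# tiny usage example with random data (illustrative only)
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = A @ (rng.standard_normal(100) * (rng.random(100) < 0.1))
x_hat = adaptive_fista_sketch(A, b, lam=0.1)
print("nonzeros:", np.count_nonzero(np.abs(x_hat) > 1e-8))
```

Since beta = 0 is always among the candidates, each iteration is at least as good in objective value as a plain proximal gradient step; the exact line search described in the abstract optimizes the extrapolation parameter itself rather than selecting from a fixed grid.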

Keywords: FISTA, proximal algorithm, proximal quasi-Newton, SR1 quasi-Newton

Category 1: Convex and Nonsmooth Optimization

Citation: arXiv:1711.04343

Download: [PDF]

Entry Submitted: 11/15/2017
Entry Accepted: 11/15/2017
Entry Last Modified: 11/15/2017
