Adaptive FISTA

In this paper we propose an adaptively extrapolated proximal gradient method, based on the accelerated proximal gradient method (also known as FISTA), in which the extrapolation parameter is locally optimized by an exact (or inexact) line search. It turns out that in some situations the proposed algorithm is equivalent to a class of SR1 (identity minus rank-1) proximal quasi-Newton methods. Convergence is proved in a general non-convex setting; hence, as a byproduct, we also obtain new convergence guarantees for proximal quasi-Newton methods. For convex problems, we can devise hybrid algorithms that enjoy the classical O(1/k^2) convergence rate of accelerated proximal gradient methods. The efficiency of the new method is demonstrated on several classical optimization problems.
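To make the idea concrete, below is a minimal Python sketch of a FISTA-style iteration in which the extrapolation parameter is chosen by a bounded scalar line search on the objective value after one forward-backward step. The function names (adaptive_fista, grad_f, prox_g), the use of SciPy's minimize_scalar, and the LASSO example are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def adaptive_fista(grad_f, prox_g, f, g, x0, step, beta_max=1.0,
                   max_iter=500, tol=1e-8):
    """FISTA-style proximal gradient method with an adaptively chosen
    extrapolation parameter (illustrative sketch, not the paper's code).

    grad_f : gradient of the smooth part f
    prox_g : prox of g, prox_g(v, t) = argmin_u g(u) + ||u - v||^2 / (2t)
    f, g   : the two objective pieces, used by the line search
    step   : step size, e.g. 1/L for an L-Lipschitz gradient of f
    """
    x_prev = x0.copy()
    x = prox_g(x0 - step * grad_f(x0), step)   # one forward-backward step
    for _ in range(max_iter):
        d = x - x_prev                          # extrapolation direction

        # Line search over the extrapolation parameter beta: pick beta to
        # minimize the objective after one forward-backward step from the
        # extrapolated point x + beta * d.
        def phi(beta):
            y = x + beta * d
            z = prox_g(y - step * grad_f(y), step)
            return f(z) + g(z)

        beta = minimize_scalar(phi, bounds=(0.0, beta_max),
                               method="bounded").x

        y = x + beta * d
        x_next = prox_g(y - step * grad_f(y), step)
        if np.linalg.norm(x_next - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_next
        x_prev, x = x, x_next
    return x

# Example: LASSO, min_x 0.5 * ||A x - b||^2 + lam * ||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)); b = rng.standard_normal(40); lam = 0.1
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
g = lambda x: lam * np.abs(x).sum()
prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0)
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of grad_f
x_star = adaptive_fista(grad_f, prox_g, f, g, np.zeros(100), 1.0 / L)
```

Each trial value of beta in the line search costs one extra gradient and prox evaluation, so in practice an inexact search with a few trial points, as the abstract allows, keeps the per-iteration cost close to that of plain FISTA.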

Citation

arXiv:1711.04343
