Convergence rate of inexact proximal point methods with relative error criteria for convex optimization

In this paper, we consider a class of inexact proximal point methods for convex optimization that allows a relative error tolerance in the approximate solution of each proximal subproblem. By exploiting the special structure of convex optimization problems, we derive surprisingly strong complexity bounds for this class. As a consequence, we show that the smallest norm among the projected gradients (resp., gradients) generated by the projected gradient (resp., steepest descent) method up to iteration $k$ is ${\cal O}(1/k)$ for smooth convex optimization problems.
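
To make the last claim concrete, the sketch below (ours, not the paper's algorithm) runs plain steepest descent with the standard $1/L$ step size on a randomly generated smooth convex quadratic and records the smallest gradient norm seen up to iteration $k$; the ${\cal O}(1/k)$ result guarantees this quantity decays at least that fast. All problem data and names in the snippet (A, b, grad_f, L) are illustrative assumptions.

```python
import numpy as np

# A minimal numerical sketch, NOT the paper's method: steepest descent with
# the classical 1/L step on a smooth convex quadratic, tracking the best
# (smallest) gradient norm seen up to iteration k. The data A, b and the
# names grad_f, L are illustrative assumptions, not taken from the paper.

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 20))
A = M.T @ M                        # positive semidefinite Hessian
b = rng.standard_normal(20)

def grad_f(x):
    # Gradient of f(x) = 0.5 * x^T A x - b^T x (smooth and convex).
    return A @ x - b

L = np.linalg.eigvalsh(A).max()    # Lipschitz constant of grad_f

x = np.zeros(20)
best = np.inf
for k in range(1, 2001):
    g = grad_f(x)
    best = min(best, np.linalg.norm(g))
    x = x - g / L                  # steepest descent step
    if k in (10, 100, 1000, 2000):
        # The O(1/k) bound guarantees k * best stays bounded; it may even
        # shrink, since O(1/k) is only an upper bound on the decay rate.
        print(f"k={k:5d}  best ||grad|| = {best:.3e}  k * best = {k * best:.3e}")
```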
