Optimization Online


Convergence rate of inexact proximal point methods with relative error criteria for convex optimization

Renato DC Monteiro (monteiro***at***isye.gatech.edu)
Benar F Svaiter (benar***at***impa.br)

Abstract: In this paper, we consider a class of inexact proximal point methods for convex optimization that allows a relative error tolerance in the approximate solution of each proximal subproblem. By exploiting the special structure of convex optimization problems, we are able to derive surprising complexity bounds for this class. As a consequence, we show that the best size of the projected gradients (resp., gradients) generated by the projected gradient (resp., steepest descent) method up to iteration $k$ is ${\cal O}(1/k)$ in the context of smooth convex optimization problems.
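The following sketch (not code from the paper) illustrates the quantity the abstract bounds: it runs plain steepest descent on a smooth convex quadratic $f(x) = \tfrac{1}{2}\sum_i a_i x_i^2$ with the standard step size $1/L$, where $L = \max_i a_i$ is the Lipschitz constant of the gradient, and records the best (smallest) gradient norm observed up to each iteration $k$. The problem instance and all names here are illustrative choices, not the authors' setup.

```python
import math

def best_gradient_norms(a, x0, iters):
    """Steepest descent on f(x) = 0.5 * sum(a_i * x_i^2); returns, for each
    iteration k, the smallest gradient norm seen up to and including k."""
    L = max(a)                      # Lipschitz constant of the gradient
    x = list(x0)
    best, best_so_far = [], math.inf
    for _ in range(iters):
        g = [ai * xi for ai, xi in zip(a, x)]        # gradient of f at x
        best_so_far = min(best_so_far,
                          math.sqrt(sum(gi * gi for gi in g)))
        best.append(best_so_far)
        x = [xi - gi / L for xi, gi in zip(x, g)]    # step size 1/L
    return best

# Small illustrative instance: diagonal quadratic in four variables.
a = [1.0, 4.0, 9.0, 16.0]
x0 = [1.0, 1.0, 1.0, 1.0]
best = best_gradient_norms(a, x0, 400)
# best[k] is nonincreasing in k, consistent with an O(1/k) bound on the
# best gradient norm for smooth convex problems.
```

On this strongly convex instance the decay is in fact linear, which is faster than the worst-case ${\cal O}(1/k)$ rate the abstract states for general smooth convex problems.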

Keywords: projected gradient, forward-backward splitting, inexact proximal point, convex optimization, steepest descent, complexity

Category 1: Nonlinear Optimization

Category 2: Convex and Nonsmooth Optimization



Entry Submitted: 08/24/2010
Entry Accepted: 08/24/2010
Entry Last Modified: 05/26/2012
