Optimization Online

Convergence rate estimates for the gradient differential inclusion

Osman Guler (guler***at***math.umbc.edu)

Abstract: Let f be a lower semicontinuous convex function on a Euclidean space, finite or infinite dimensional. The gradient differential inclusion is a generalized differential equation which requires that -x'(t) lie in the subdifferential of f at x(t). It is the continuous version of the gradient descent method for minimizing f when f is differentiable, and of the proximal point method in the general case. There is a remarkable similarity between the behavior of the inclusion and its discrete counterparts, as well as in the methods used to analyze both. As a simple consequence of our previous results on the proximal point method, we prove global convergence rate estimates for the optimality gap: f(x(t)) - min f = O(1/t), or o(1/t).
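As a rough illustration of the discrete counterpart mentioned in the abstract, the following Python sketch runs the proximal point method on f(x) = ||x||_1, a lower semicontinuous convex function whose proximal map (soft-thresholding) is explicit. The choice of f, the step size lam, and all variable names are illustrative assumptions, not taken from the paper; the sketch only shows the iteration that discretizes the gradient inclusion and tracks the optimality gap.

import numpy as np

def prox_l1(v, lam):
    # Proximal map of f(x) = ||x||_1 with parameter lam (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def proximal_point(x0, lam, iters):
    # Proximal point iteration: x_{k+1} = argmin_x f(x) + ||x - x_k||^2 / (2*lam).
    # This is the implicit-Euler discretization of -x'(t) in the subdifferential of f at x(t),
    # with lam playing the role of the time step.
    x = x0.copy()
    gaps = []
    for _ in range(iters):
        x = prox_l1(x, lam)
        gaps.append(np.sum(np.abs(x)))  # optimality gap f(x_k) - min f, since min ||x||_1 = 0
    return x, gaps

rng = np.random.default_rng(0)
x0 = rng.standard_normal(50)
_, gaps = proximal_point(x0, lam=0.01, iters=200)
for k in (10, 50, 100, 200):
    # Elapsed "time" after k steps is k*lam; O(1/t)-type estimates for the proximal
    # point method bound the gap in terms of 1/(k*lam).
    print(k, gaps[k - 1])

The printed gaps decay at least as fast as the 1/(k*lam) bound suggests; for this particular f they in fact reach zero in finitely many steps, so the bound is not tight here.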

Keywords: steepest descent, proximal-point method, convergence rate, non-linear semigroups.

Category 1: Convex and Nonsmooth Optimization (Convex Optimization)

Category 2: Convex and Nonsmooth Optimization (Nonsmooth Optimization)

Citation: Department of Mathematics and Statistics, University of Maryland, Baltimore County, Baltimore, MD 21250. November, 2003.


Entry Submitted: 12/02/2003
Entry Accepted: 12/02/2003
Entry Last Modified: 12/02/2003
