Optimization Online

Nearly optimal first-order methods for convex optimization under gradient norm measure: An adaptive regularization approach

Masaru Ito (ito.m***at***math.cst.nihon-u.ac.jp)
Mituhiro Fukuda (mituhiro***at***is.titech.ac.jp)

Abstract: In the development of first-order methods for smooth (resp., composite) convex optimization problems, the gradient (resp., gradient mapping) norm is a fundamental optimality measure, for which a regularization technique of first-order methods is known to be nearly optimal. In this paper, we report an adaptive regularization approach that attains this iteration complexity without prior knowledge of the distance from the initial point to the optimal solution set, which the existing regularization approach requires. To obtain even faster convergence adaptively, we then apply this approach to construct a first-order method that is adaptive to the Hölderian error bound condition (or, equivalently, the Łojasiewicz gradient property), which covers a moderately wide class of applications. The proposed method attains a nearly optimal iteration complexity with respect to the gradient mapping norm.
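The abstract refers to the known regularization technique for driving the gradient norm down: minimize the surrogate f(x) + (sigma/2)||x - x0||^2 with an accelerated method and tune sigma, which classically requires knowing the distance from x0 to the optimal set. The Python sketch below is only a minimal illustration of that generic idea, using a guess-and-check halving of sigma; it is not the authors' adaptive algorithm, and the solver, constants, and toy quadratic are assumptions made for the example.

    import numpy as np

    def agm_strongly_convex(grad, x0, L, mu, tol):
        # Nesterov's constant-momentum scheme for an L-smooth, mu-strongly convex
        # function; stop once the gradient norm at the extrapolated point is <= tol.
        beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
        x = x0.copy()
        y = x0.copy()
        while True:
            g = grad(y)
            if np.linalg.norm(g) <= tol:
                return y
            x_next = y - g / L
            y = x_next + beta * (x_next - x)
            x = x_next

    def small_gradient_via_regularization(grad_f, x0, L, eps, sigma0=1.0):
        # Guess-and-check over the regularization weight sigma: solve the
        # sigma-strongly convex surrogate f(x) + (sigma/2)||x - x0||^2, accept
        # the point if ||grad f|| <= eps, otherwise halve sigma and repeat.
        # (Illustrative only; not the adaptive scheme proposed in the paper.)
        sigma = sigma0
        while True:
            grad_s = lambda z, s=sigma: grad_f(z) + s * (z - x0)
            y = agm_strongly_convex(grad_s, x0, L + sigma, sigma, tol=eps / 4.0)
            if np.linalg.norm(grad_f(y)) <= eps:
                return y
            sigma /= 2.0

    # Toy usage (hypothetical): f(x) = 0.5 x^T A x - b^T x, grad f(x) = A x - b, L = 100.
    A = np.diag([1.0, 10.0, 100.0])
    b = np.array([1.0, 2.0, 3.0])
    x = small_gradient_via_regularization(lambda z: A @ z - b, np.zeros(3), L=100.0, eps=1e-3)
    print(np.linalg.norm(A @ x - b))  # <= 1e-3 by construction of the stopping test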

Keywords: smooth/composite convex optimization; accelerated proximal gradient methods; Hölderian error bound; adaptive methods

Category 1: Convex and Nonsmooth Optimization (Convex Optimization)

Category 2: Convex and Nonsmooth Optimization (Nonsmooth Optimization)

Citation:

Download: [PDF]

Entry Submitted: 12/27/2019
Entry Accepted: 12/27/2019
Entry Last Modified: 10/05/2020
