Optimization Online


On the local convergence analysis of the Gradient Sampling method

Elias Salomão Helou(elias***at***icmc.usp.br)
Sandra A. Santos(sandra***at***ime.unicamp.br)
Lucas E. A. Simões(simoes.lea***at***gmail.com)

Abstract: The Gradient Sampling method is a recently developed tool for solving unconstrained nonsmooth optimization problems. Using only first-order information about the objective function, it generalizes the steepest descent method, one of the most classical methods for minimizing a smooth function. This manuscript determines under which circumstances the Gradient Sampling algorithm can be expected to share the local convergence results of the Cauchy method. Additionally, at the end of this study, we show how the required hypotheses can be enforced in practice during the execution of the algorithm.
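The basic Gradient Sampling iteration described above (sample gradients near the current point, take the minimum-norm element of their convex hull as a stabilized steepest descent direction, and backtrack on the objective) can be sketched as follows. This is an illustrative simplification, not the authors' implementation: the function names, the sampling-radius update, and the projected-gradient subroutine for the min-norm subproblem are all assumptions made for the sketch.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def min_norm_element(G, iters=500):
    """Minimum-norm point in the convex hull of the rows of G,
    computed by projected gradient on the simplex of weights."""
    m = G.shape[0]
    lam = np.full(m, 1.0 / m)
    Q = G @ G.T
    step = 1.0 / (np.linalg.norm(Q, 2) + 1e-12)
    for _ in range(iters):
        lam = project_simplex(lam - step * (Q @ lam))
    return G.T @ lam

def gradient_sampling(f, grad, x0, eps=0.1, m=10, max_iter=50,
                      tol=1e-6, rng=None):
    """Toy Gradient Sampling loop: gradients are sampled at x and at m
    random nearby points; the sampling radius eps is halved when the
    min-norm direction is (numerically) zero."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        pts = x + eps * rng.uniform(-1.0, 1.0, size=(m, x.size))
        G = np.vstack([grad(x)] + [grad(p) for p in pts])
        g = min_norm_element(G)
        if np.linalg.norm(g) <= tol:
            eps *= 0.5  # near-stationary for this radius: refine sampling
            continue
        # Armijo backtracking line search along -g
        t = 1.0
        while f(x - t * g) > f(x) - 1e-4 * t * np.dot(g, g) and t > 1e-12:
            t *= 0.5
        x = x - t * g
    return x
```

On a simple nonsmooth test problem such as f(x) = ||x||_1, where plain steepest descent would oscillate across the kinks, the sampled convex hull eventually contains subgradients of both signs, so the min-norm direction vanishes near the minimizer and the iterates settle close to the origin.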

Keywords: nonsmooth nonconvex optimization; gradient sampling; local convergence; unconstrained minimization

Category 1: Nonlinear Optimization

Category 2: Convex and Nonsmooth Optimization


Download: [PDF]

Entry Submitted: 10/25/2016
Entry Accepted: 10/25/2016
Entry Last Modified: 10/25/2016
