A Quasi-Newton Algorithm for Nonconvex, Nonsmooth Optimization with Global Convergence Guarantees
Frank E. Curtis (frank.e.curtis@gmail.com)
Abstract: A line search algorithm for minimizing nonconvex and/or nonsmooth objective functions is presented. The algorithm is a hybrid between a standard Broyden–Fletcher–Goldfarb–Shanno (BFGS) method and an adaptive gradient sampling (GS) method. The BFGS strategy is employed because it typically yields fast convergence to the vicinity of a stationary point, while the adaptive GS strategy ensures that convergence to such a point continues. Under suitable assumptions, it is proved that the algorithm converges globally with probability one. The algorithm has been implemented in C++, and the results of numerical experiments illustrate the efficacy of the proposed approach.
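To give a flavor of the gradient sampling component mentioned in the abstract, the following is a minimal, illustrative Python sketch of one GS step direction: sample gradients in a ball around the current iterate, approximate the minimum-norm element of their convex hull (here with simple Frank–Wolfe iterations on the simplex, rather than the QP solve a real implementation would use), and negate it. All function names and parameters here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def gs_direction(grad, x, eps=1e-2, m=10, qp_iters=200, rng=None):
    """One gradient-sampling step direction (illustrative sketch only).

    Samples m gradients in an eps-ball around x, then approximates the
    minimum-norm element of their convex hull via Frank-Wolfe iterations
    on the simplex; the negated result serves as a descent direction for
    nonsmooth objectives. A practical GS method solves this small QP
    exactly and combines the direction with a line search.
    """
    rng = np.random.default_rng(rng)
    # Gradients at x and at m random nearby points (f assumed differentiable there).
    pts = [x] + [x + eps * rng.uniform(-1, 1, size=x.shape) for _ in range(m)]
    G = np.array([grad(p) for p in pts])     # shape (m+1, n)
    lam = np.full(len(G), 1.0 / len(G))      # start at simplex center
    for k in range(qp_iters):                # minimize ||lam @ G||^2 over simplex
        g = lam @ G                          # current convex combination
        i = np.argmin(G @ g)                 # vertex minimizing the linearization
        step = 2.0 / (k + 2.0)               # standard Frank-Wolfe step size
        lam = (1 - step) * lam
        lam[i] += step
    return -(lam @ G)                        # negated approximate min-norm element

# Example on the nonsmooth function f(x) = |x1| + 0.5 * x2**2.
f_grad = lambda p: np.array([np.sign(p[0]), p[1]])
d = gs_direction(f_grad, np.array([0.0, 1.0]), rng=0)
```

Near the kink at x1 = 0, the sampled gradients have first components of both signs, so the min-norm combination nearly cancels them, while the smooth second coordinate yields an ordinary steepest-descent component.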
Keywords: nonsmooth optimization, nonconvex optimization, unconstrained optimization, quasi-Newton methods, gradient sampling, line search methods
Category 1: Nonlinear Optimization
Category 2: Nonlinear Optimization (Unconstrained Optimization)
Category 3: Convex and Nonsmooth Optimization (Nonsmooth Optimization)
Citation: F. E. Curtis and X. Que. A Quasi-Newton Algorithm for Nonconvex, Nonsmooth Optimization with Global Convergence Guarantees. Mathematical Programming Computation, 7(4):399-428, 2015.
Entry Submitted: 05/31/2014