Optimization Online


A derivative-free Gauss-Newton method

Coralia Cartis(cartis***at***maths.ox.ac.uk)
Lindon Roberts(lindon.roberts***at***maths.ox.ac.uk)

Abstract: We present DFO-GN, a derivative-free version of the Gauss-Newton method for solving nonlinear least-squares problems. As is common in derivative-free optimization, DFO-GN uses interpolation of function values to build a model of the objective, which is then used within a trust-region framework to give a globally convergent algorithm requiring $O(\epsilon^{-2})$ iterations to reach approximate first-order criticality within tolerance $\epsilon$. This algorithm is a simplification of the method of Zhang, Conn & Scheinberg (SIAM J. Optim., 2010), replacing the quadratic models of each residual with linear models. We demonstrate that DFO-GN performs comparably to the method of Zhang et al. in terms of objective evaluations, while achieving substantially faster runtimes and improved scalability.

Keywords: derivative-free optimization, least-squares, Gauss-Newton method, trust region methods, global convergence, worst-case complexity.

Category 1: Nonlinear Optimization

Citation: Technical report, Numerical Analysis Group, Mathematical Institute, University of Oxford, 2017.

Entry Submitted: 10/30/2017
Entry Accepted: 10/30/2017
Entry Last Modified: 10/30/2017
