A derivative-free Gauss-Newton method

Coralia Cartis (cartis@maths.ox.ac.uk)
Lindon Roberts (lindon.roberts@maths.ox.ac.uk)

Abstract: We present DFO-GN, a derivative-free version of the Gauss-Newton method for solving nonlinear least-squares problems. As is common in derivative-free optimization, DFO-GN uses interpolation of function values to build a model of the objective, which is then used within a trust-region framework to give a globally convergent algorithm requiring $O(\epsilon^{-2})$ iterations to reach approximate first-order criticality within tolerance $\epsilon$. This algorithm is a simplification of the method from (Zhang, Conn & Scheinberg, SIAM J Opt, 2010), in which we replace the quadratic models for each residual with linear models. We demonstrate that DFO-GN performs comparably to the method of Zhang et al. in terms of objective evaluations, while having a substantially faster runtime and improved scalability.

Keywords: derivative-free optimization, least-squares, Gauss-Newton method, trust region methods, global convergence, worst-case complexity

Category 1: Nonlinear Optimization

Citation: Technical report, Numerical Analysis Group, Mathematical Institute, University of Oxford, 2017.

Download: [PDF]
Entry Submitted: 10/30/2017
Entry Accepted: 10/30/2017
Entry Last Modified: 10/30/2017
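The core idea in the abstract, namely building linear models of each residual from function values only and using them inside a Gauss-Newton trust-region loop, can be sketched as follows. This is an illustrative toy, not the authors' DFO-GN implementation: the interpolation set (current point plus coordinate perturbations), the truncate-to-radius step, the accept/shrink rule, and all names and parameters here are our own simplifying assumptions.

```python
# Toy derivative-free Gauss-Newton trust-region loop (illustrative sketch,
# NOT the DFO-GN algorithm of Cartis & Roberts). Linear models of the
# residuals are built by interpolation at x0 and the points x0 + h*e_i.
import numpy as np

def interp_linear_models(r, x0, h):
    """Fit linear models r(x) ~ r0 + J(x - x0) by interpolating r at x0
    and the n perturbed points x0 + h*e_i (a simple choice of interpolation
    set; DFO-GN maintains and updates its set more carefully)."""
    n = x0.size
    r0 = r(x0)
    J = np.empty((r0.size, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        J[:, i] = (r(x0 + e) - r0) / h  # slope matching the two points
    return r0, J

def gauss_newton_tr_step(r0, J, delta):
    """Approximately minimise the Gauss-Newton model ||r0 + J s||^2 over
    ||s|| <= delta: take the unconstrained step and truncate to the radius
    (a crude stand-in for a proper trust-region subproblem solver)."""
    s, *_ = np.linalg.lstsq(J, -r0, rcond=None)
    ns = np.linalg.norm(s)
    return s if ns <= delta else s * (delta / ns)

# Example: Rosenbrock in least-squares form, r(x) = (10(x2 - x1^2), 1 - x1).
r = lambda x: np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])

x, delta = np.array([-1.2, 1.0]), 1.0
for _ in range(500):
    r0, J = interp_linear_models(r, x, 1e-6)
    s = gauss_newton_tr_step(r0, J, delta)
    if np.sum(r(x + s) ** 2) < np.sum(r0 ** 2):
        x, delta = x + s, min(2.0 * delta, 10.0)  # accept; grow radius
    else:
        delta *= 0.5                              # reject; shrink radius
```

Note that the loop never evaluates derivatives of `r`: all Jacobian information comes from the interpolated linear models, which is what makes the method derivative-free.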