A stochastic Levenberg-Marquardt method using random models with complexity results and application to data assimilation

Globally convergent variants of the Gauss-Newton algorithm are often the methods of choice to tackle nonlinear least-squares problems. Among such frameworks, Levenberg-Marquardt and trust-region methods are two well-established, similar paradigms. Both schemes have been studied when the Gauss-Newton model is replaced by a random model that is only accurate with a given probability. Trust-region schemes have also been applied to problems where the objective value is subject to noise: this setting is of particular interest in fields such as data assimilation, where efficient methods that can adapt to noise are needed to account for the intrinsic uncertainty in the input data. In this paper, we describe a stochastic Levenberg-Marquardt algorithm that handles noisy objective function values and random models, provided sufficient accuracy is achieved in probability. Our method relies on a specific scaling of the regularization parameter, which allows us to leverage existing results for trust-region algorithms. Moreover, we exploit the structure of our objective through the use of a family of stationarity criteria tailored to least-squares problems. Provided the probability of accurate function estimates and models is sufficiently large, we bound the expected number of iterations needed to reach an approximate stationary point, generalizing results based on deterministic models or noiseless function values. We illustrate the link between our approach and several applications related to inverse problems and machine learning.
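For context, the sketch below illustrates a classical, deterministic Levenberg-Marquardt iteration for a nonlinear least-squares problem: a regularized Gauss-Newton system is solved at each step, and the regularization parameter is decreased after successful steps and increased after unsuccessful ones, mirroring trust-region behaviour. This is only a generic baseline under assumed notation (functions `residual`, `jacobian` and parameter `gamma` are illustrative); it is not the stochastic algorithm, noise model, or parameter scaling analyzed in the paper.

```python
import numpy as np

def lm_step(residual, jacobian, x, gamma):
    """One Levenberg-Marquardt trial step for min 0.5 * ||residual(x)||^2."""
    r = residual(x)
    J = jacobian(x)
    # Regularized Gauss-Newton system: (J^T J + gamma * I) s = -J^T r
    return np.linalg.solve(J.T @ J + gamma * np.eye(x.size), -J.T @ r)

def lm_solve(residual, jacobian, x0, gamma=1.0, tol=1e-8, max_iter=200):
    """Basic LM loop: accept steps that decrease the objective, otherwise
    strengthen the regularization (trust-region-like update of gamma)."""
    x = x0.copy()
    f = 0.5 * residual(x) @ residual(x)
    for _ in range(max_iter):
        s = lm_step(residual, jacobian, x, gamma)
        x_trial = x + s
        f_trial = 0.5 * residual(x_trial) @ residual(x_trial)
        if f_trial < f:               # successful step: accept, relax regularization
            x, f = x_trial, f_trial
            gamma = max(gamma / 2.0, 1e-12)
        else:                         # unsuccessful step: increase regularization
            gamma *= 4.0
        if np.linalg.norm(jacobian(x).T @ residual(x)) < tol:
            break
    return x

# Illustrative use: fit y = a * exp(b * t) with unknown (a, b).
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)
residual = lambda p: p[0] * np.exp(p[1] * t) - y
jacobian = lambda p: np.column_stack([np.exp(p[1] * t),
                                      p[0] * t * np.exp(p[1] * t)])
p_hat = lm_solve(residual, jacobian, np.array([1.0, 0.0]))
```

In the stochastic setting studied in the paper, the exact residuals and Jacobians used above would be replaced by noisy function estimates and probabilistically accurate random models, with the regularization parameter scaled so that trust-region-style convergence arguments still apply.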

Citation

To appear in SIAM/ASA J. Uncertain. Quantif., 2021
