Quasi-Newton methods for large-scale distributed parameter estimation

We develop quasi-Newton methods for distributed parameter estimation problems in which the forward problem is governed by a set of partial differential equations. A Tikhonov-style regularization approach yields an optimization problem with a special structure, where the gradients are calculated using the adjoint method. In many cases, standard quasi-Newton methods such as L-BFGS are not very effective and tend to converge slowly. Taking advantage of the special structure of the problem and of quantities already computed in typical gradient-descent methods, we develop a class of highly effective methods for solving the problem. We demonstrate the merits and effectiveness of our algorithm on two realistic model problems.
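For orientation, the sketch below shows the generic setup the abstract describes: a Tikhonov-regularized data misfit minimized with standard L-BFGS, the baseline that the paper reports as slow to converge. The forward map, data, and regularization weight are illustrative stand-ins, not the paper's model problems or its structure-exploiting algorithm.

```python
# Minimal sketch (not the paper's method): a generic Tikhonov-regularized
# parameter estimation problem solved with off-the-shelf L-BFGS.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) / np.sqrt(n)    # stand-in for a discretized forward operator
m_true = np.sin(np.linspace(0.0, np.pi, n))     # "true" distributed parameter
d = A @ m_true + 0.01 * rng.standard_normal(n)  # noisy observations
alpha = 1e-2                                    # Tikhonov regularization parameter

def objective(m):
    # J(m) = 1/2 ||F(m) - d||^2 + alpha/2 ||m||^2
    r = A @ m - d
    return 0.5 * r @ r + 0.5 * alpha * m @ m

def gradient(m):
    # For a linear forward map the gradient is A^T (A m - d) + alpha * m;
    # in the PDE setting the A^T r term is what the adjoint solve provides.
    return A.T @ (A @ m - d) + alpha * m

result = minimize(objective, np.zeros(n), jac=gradient, method="L-BFGS-B")
print("data misfit:", np.linalg.norm(A @ result.x - d))
```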

Citation

Dept. of Math/CS, Emory University, 04/03
