Optimization Online

A subspace minimization method for the trust-region step

Jennifer B. Erway (erwayjb***at***wfu.edu)
Philip E. Gill (pgill***at***ucsd.edu)

Abstract: We consider methods for large-scale unconstrained minimization based on finding an approximate minimizer of a quadratic function subject to a two-norm trust-region constraint. The Steihaug-Toint method uses the conjugate-gradient (CG) algorithm to minimize the quadratic over a sequence of expanding subspaces until the iterates either converge to an interior point or cross the constraint boundary. However, if the CG method is used with a preconditioner, the Steihaug-Toint method requires that the trust-region norm be defined in terms of the preconditioning matrix. If a different preconditioner is used for each subproblem, the shape of the trust region can change substantially from one subproblem to the next, which invalidates many of the assumptions on which standard methods for adjusting the trust-region radius are based. In this paper we propose a method that allows the trust-region norm to be defined independently of the preconditioner. The method solves the inequality-constrained trust-region subproblem over a sequence of evolving low-dimensional subspaces. Each subspace includes an accelerator direction defined by a regularized Newton method for satisfying the optimality conditions of a primal-dual interior method. A crucial property of this direction is that it can be computed by applying the preconditioned CG method to a positive-definite system in both the primal and dual variables of the trust-region subproblem. Numerical experiments on problems from the CUTEr test collection indicate that the method can require significantly fewer function evaluations than other methods. In addition, experiments with general-purpose preconditioners show that it is possible to significantly reduce the number of matrix-vector products relative to those required without preconditioning.
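
As background for the abstract above, the following is a minimal sketch (not taken from the paper) of the baseline Steihaug-Toint truncated CG iteration for the two-norm trust-region subproblem: minimize g'p + (1/2) p'Bp subject to ||p|| <= delta. The function names, the dense NumPy formulation, and the stopping tolerance are illustrative assumptions; the paper's proposed subspace method with the primal-dual accelerator direction is not reproduced here.

    import numpy as np

    def steihaug_toint_cg(B, g, delta, tol=1e-8, max_iter=None):
        # Approximately minimize g'p + 0.5 p'Bp subject to ||p|| <= delta
        # using the Steihaug-Toint truncated conjugate-gradient iteration.
        n = g.size
        max_iter = 2 * n if max_iter is None else max_iter
        p = np.zeros(n)        # iterate, started at the trust-region center
        r = g.copy()           # model gradient g + B p at the current iterate
        d = -r                 # first search direction: steepest descent
        if np.linalg.norm(r) <= tol:
            return p
        for _ in range(max_iter):
            Bd = B @ d
            dBd = d @ Bd
            if dBd <= 0.0:
                # Negative (or zero) curvature: follow d to the boundary.
                return _step_to_boundary(p, d, delta)
            alpha = (r @ r) / dBd
            p_next = p + alpha * d
            if np.linalg.norm(p_next) >= delta:
                # The CG step crosses the constraint: stop on the boundary.
                return _step_to_boundary(p, d, delta)
            r_next = r + alpha * Bd
            if np.linalg.norm(r_next) <= tol:
                return p_next
            beta = (r_next @ r_next) / (r @ r)
            d = -r_next + beta * d
            p, r = p_next, r_next
        return p

    def _step_to_boundary(p, d, delta):
        # Choose tau >= 0 so that ||p + tau*d|| = delta (positive root of a quadratic).
        a = d @ d
        b = 2.0 * (p @ d)
        c = p @ p - delta ** 2
        tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
        return p + tau * d

    # Small example on a 2x2 positive-definite model whose unconstrained
    # minimizer lies outside the trust region, so the step stops on the boundary.
    if __name__ == "__main__":
        B = np.array([[4.0, 1.0], [1.0, 2.0]])
        g = np.array([1.0, -1.0])
        print(steihaug_toint_cg(B, g, delta=0.5))

Note that when this iteration is preconditioned, the termination tests above are carried out in the norm induced by the preconditioner, which is precisely the coupling between preconditioner and trust-region shape that the paper seeks to remove.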

Keywords: Large-scale unconstrained optimization, trust-region methods, conjugate-gradient methods, Krylov methods, preconditioning

Category 1: Nonlinear Optimization

Category 2: Nonlinear Optimization (Unconstrained Optimization)

Citation: Numerical Analysis Report 08-1, Department of Mathematics, University of California San Diego, La Jolla, CA, 92093-0112; Technical Report 2008-2, Department of Mathematics, Wake Forest University, Winston-Salem, NC, 2008

Download: [PDF]

Entry Submitted: 03/19/2008
Entry Accepted: 03/19/2008
Entry Last Modified: 05/13/2009
