Optimization Online


Scalable Nonlinear Programming Via Exact Differentiable Penalty Functions and Trust-Region Newton Methods

Victor M Zavala (vzavala***at***mcs.anl.gov)
Mihai Anitescu (anitescu***at***mcs.anl.gov)

Abstract: We present an approach for nonlinear programming (NLP) based on the direct minimization of an exact differentiable penalty function using trust-region Newton techniques. As opposed to existing algorithmic approaches to NLP, the approach provides all the features required for scalability: it can efficiently detect and exploit directions of negative curvature, it is superlinearly convergent, and it enables the scalable computation of the Newton step through iterative linear algebra. Moreover, it presents features that are desirable for parametric optimization problems that have to be solved in a latency-limited environment, as is the case for model predictive control and mixed-integer nonlinear programming. These features are fast detection of activity, efficient warm-starting, and progress on a primal-dual merit function at every iteration. We derive general convergence results for our approach and demonstrate its behavior through numerical studies.
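
To illustrate the idea of minimizing an exact differentiable penalty with a trust-region Newton method, here is a minimal sketch (not the paper's algorithm) using Fletcher's exact penalty on a toy equality-constrained problem, min f(x) = x1^2 + x2^2 subject to c(x) = x1 + x2 - 1 = 0. The penalty parameter rho, the test problem, and the hand-derived multiplier estimate lambda(x) = x1 + x2 are all assumptions chosen so the penalty minimizer coincides with the constrained solution (0.5, 0.5); SciPy's trust-region Newton-CG solver stands in for the trust-region Newton machinery.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch only, not the paper's method.
# Fletcher's exact differentiable penalty for
#   min f(x) = x1^2 + x2^2   s.t.   c(x) = x1 + x2 - 1 = 0
# is  phi(x) = f(x) - lambda(x) * c(x) + (rho/2) * c(x)^2,
# where lambda(x) is the least-squares multiplier estimate.
# For this problem lambda(x) = x1 + x2 (derived by hand), and any
# rho > 1 makes the unconstrained minimizer of phi equal (0.5, 0.5).

rho = 4.0  # assumed penalty parameter; any rho > 1 works here

def phi(x):
    s = x[0] + x[1]
    lam = s                 # least-squares multiplier lambda(x)
    c = s - 1.0             # constraint residual
    return x[0]**2 + x[1]**2 - lam * c + 0.5 * rho * c**2

def grad(x):
    s = x[0] + x[1]
    # d/dx of [f - lam*c + (rho/2) c^2]; the non-separable terms
    # depend only on s, so both components share them.
    return 2.0 * x - (2.0 * s - 1.0) + rho * (s - 1.0)

def hess(x):
    # Constant Hessian of phi for this quadratic penalty
    return np.array([[rho, rho - 2.0],
                     [rho - 2.0, rho]])

# Trust-region Newton-CG minimization of the smooth penalty
res = minimize(phi, x0=np.array([2.0, -1.0]), method="trust-ncg",
               jac=grad, hess=hess)
print(res.x)  # close to [0.5, 0.5], the constrained minimizer
```

Because the penalty is exact and differentiable, a single unconstrained trust-region Newton solve recovers the constrained minimizer, with no active-set combinatorics; this is the property the abstract exploits for scalability.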

Keywords: scalable, nonlinear programming, exact differentiable penalty, trust-region, Newton, iterative

Category 1: Nonlinear Optimization

Category 2: Applications -- Science and Engineering (Control Applications)


Download: [PDF]

Entry Submitted: 08/20/2012
Entry Accepted: 08/20/2012
Entry Last Modified: 09/05/2012

