Dynamic Optimization with Convergence Guarantees

We present a novel direct transcription method for solving optimization problems subject to nonlinear differential and inequality constraints. To provide numerical convergence guarantees, it is sufficient for the functions that define the problem to satisfy boundedness and Lipschitz conditions. Our assumptions are the most general to date: we do not require uniqueness, differentiability, or constraint qualifications to hold, and we avoid the use of Lagrange multipliers. Our approach differs fundamentally from state-of-the-art collocation-based methods in that we follow a least-squares approach to finding approximate solutions of the differential equations. The objective is augmented with the integral of a quadratic penalty on the differential-equation residual and a logarithmic barrier for the inequality constraints, as well as a quadratic penalty on the point-constraint residual. The resulting unconstrained, infinite-dimensional optimization problem is discretized using finite elements, with integrals replaced by quadrature approximations when they cannot be evaluated analytically. Order-of-convergence results are derived, even when components of the solution are discontinuous.
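Concretely, in generic Bolza-problem notation (the symbols below are our own illustration, not necessarily those of the paper): given an objective J(x, u), dynamics x' = f(x, u, t), path constraints c(x, u, t) >= 0, and point constraints b(x(t_0), x(t_f)) = 0, the augmented unconstrained functional described above takes a form such as

```latex
% Illustrative sketch of the augmented functional; rho is a penalty weight
% and mu a barrier parameter (notation assumed, not taken from the paper).
\min_{x,\,u}\; J(x,u)
  + \rho \int_{t_0}^{t_f} \bigl\| \dot{x}(t) - f\bigl(x(t),u(t),t\bigr) \bigr\|_2^2 \,\mathrm{d}t
  - \mu  \int_{t_0}^{t_f} \sum_i \log c_i\bigl(x(t),u(t),t\bigr) \,\mathrm{d}t
  + \rho \,\bigl\| b\bigl(x(t_0),x(t_f)\bigr) \bigr\|_2^2
```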
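A minimal sketch of the subsequent discretization step on a hypothetical toy problem (minimize the integral of u^2 subject to x' = u, x(0) = 0, x(1) = 1, u <= 2; all names and parameter values here are assumptions for illustration, not the paper's implementation): piecewise-linear finite elements for the state, element-wise constant controls, and element-wise quadrature for the integrals.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy instance (for illustration only, not from the paper):
#   minimize  \int_0^1 u(t)^2 dt
#   s.t.      x'(t) = u(t),  x(0) = 0,  x(1) = 1,  u(t) <= 2.
# Exact solution: u(t) = 1, x(t) = t.
N = 40                # number of finite elements on [0, 1]
h = 1.0 / N           # element width
rho, mu = 1e3, 1e-3   # penalty weight and barrier parameter (assumed values)

def augmented_objective(z):
    x, u = z[:N + 1], z[N + 1:]   # nodal states, element-wise constant controls
    dx = np.diff(x) / h           # derivative of the piecewise-linear state
    running = h * np.sum(u**2)                   # quadrature of the running cost
    residual = rho * h * np.sum((dx - u)**2)     # quadratic ODE-residual penalty
    slack = 2.0 - u                              # path constraint u <= 2
    if np.any(slack <= 0):                       # barrier is +inf when infeasible
        return np.inf
    barrier = -mu * h * np.sum(np.log(slack))    # logarithmic barrier term
    point = rho * (x[0]**2 + (x[-1] - 1.0)**2)   # point-constraint penalty
    return running + residual + barrier + point

# Feasible initial guess: linear state, constant control strictly inside u < 2.
z0 = np.concatenate([np.linspace(0.0, 1.0, N + 1), np.ones(N)])
sol = minimize(augmented_objective, z0, method="BFGS")
print(sol.x[:N + 1])  # nodal state values; should be close to x(t) = t
```

In the method itself, the penalty and barrier parameters are tied to the discretization in order to obtain the order-of-convergence results; the fixed values above are purely illustrative.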

October 2018