Two Numerical Methods for Optimizing Matrix Stability

Consider the affine matrix family $A(x) = A_0 + \sum_{k=1}^m x_k A_k$, mapping a design vector $x \in \mathbb{R}^m$ into the space of $n \times n$ real matrices. We are interested in how to choose $x$ to optimize the stability of the dynamical system $\dot z = A(x)z$. A classic example in control is stabilization by output feedback. We take two approaches. The first is to directly minimize $\alpha(A(x))$, the spectral abscissa (the largest real part of the eigenvalues) of $A(x)$, since this quantity bounds the asymptotic decay rate of the trajectories of the dynamical system. The spectral abscissa $\alpha(X)$ is a continuous but nonsmooth, in fact non-Lipschitz, function of the matrix argument $X$, and finding a global minimizer of $\alpha(A(x))$ is difficult. We introduce a novel random gradient bundle method for approximating \emph{local} minimizers, motivated by recent work on the nonsmooth analysis of the function $\alpha(X)$. Our second approach is to minimize a related function, the robust spectral abscissa $\alpha_\delta(A(x))$, where $\delta$ is a \emph{robustness} parameter in $(0,1)$. The motivation for the definition of the ``robust spectral abscissa'' ...
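As a concrete illustration of the objective in the first approach (a minimal sketch, not code from the paper), the following Python snippet assembles $A(x)$ for a hypothetical affine family and evaluates its spectral abscissa $\alpha(A(x))$; the matrices $A_0, A_1, A_2$ and the point $x$ below are arbitrary placeholders.

```python
# Illustrative sketch only: evaluate the spectral abscissa alpha(A(x))
# for an affine family A(x) = A0 + sum_k x_k * A_k.
import numpy as np

def affine_family(A0, Aks, x):
    """Assemble A(x) = A0 + sum_k x_k * A_k."""
    A = A0.copy()
    for xk, Ak in zip(x, Aks):
        A = A + xk * Ak
    return A

def spectral_abscissa(A):
    """alpha(A): the largest real part of the eigenvalues of A."""
    return np.max(np.linalg.eigvals(A).real)

# Placeholder data: a 3x3 family with m = 2 design parameters.
rng = np.random.default_rng(0)
A0 = rng.standard_normal((3, 3))
Aks = [rng.standard_normal((3, 3)) for _ in range(2)]
x = np.array([0.5, -1.0])

print(spectral_abscissa(affine_family(A0, Aks, x)))
```

Minimizing this quantity over $x$ is the nonsmooth, non-Lipschitz optimization problem addressed by the paper's first method.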

Citation

Submitted to Lin. Alg. Appl.
