  


Simple Efficient Solutions for Semidefinite Programming
Henry Wolkowicz (hwolkowicz@uwaterloo.ca)

Abstract: This paper provides a simple approach for solving a semidefinite program, SDP\@. As is common with many other approaches, we apply a primal-dual method that uses the perturbed optimality equations for SDP, $F_\mu(X,y,Z)=0$, where $X,Z$ are $n \times n$ symmetric matrices and $y \in \Re^n$. However, we look at this as an overdetermined system of nonlinear (bilinear) equations on vectors $X,y,Z$ which has a zero residual at optimality. We do not use any symmetrization on this system; the fact that the vectors $X,Z$ are symmetric matrices is ignored. What is different in this paper is a preprocessing step that results in one single, well-conditioned, overdetermined bilinear equation. We do not form a Schur complement for the linearized system. In addition, asymptotic q-quadratic convergence is obtained with a {\em crossover} approach that switches to affine scaling without maintaining the positive (semi)definiteness of $X$ and $Z$. This results in a marked reduction in the number and the complexity of the iterations. We use the long-accepted method for nonlinear least squares problems, the Gauss-Newton method. For large sparse data, we use an {\em inexact Gauss-Newton} approach with a preconditioned conjugate gradient method applied directly on the linearized equations in matrix space. This does not require any operator formations into a Schur complement system or matrix representations of linear operators. The cost of an iteration consists almost entirely in the solution of a (generally well-conditioned) least squares problem of size $n^2 \times \binom{n+1}{2}$. This system consists of one large (sparse) block with $\binom{n}{2}$ columns (one CG iteration costs one sparse matrix multiplication) and one small (dense) block with $n$ columns (one CG iteration costs one matrix row scaling). Exact primal and dual feasibility are guaranteed throughout the iterations.
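As a minimal sketch of the perturbed optimality equations $F_\mu(X,y,Z)=0$, the following toy code (illustrative only, not the paper's implementation) sets them up for a small MaxCut-type SDP with constraints $\mathrm{diag}(X)=e$, and drives $\mu$ toward zero using SciPy's Levenberg-Marquardt routine as a damped Gauss-Newton stand-in. As in the abstract, no symmetrization is applied and positive definiteness of $X,Z$ is not enforced; the problem data and the continuation schedule for $\mu$ are assumptions for the example.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy MaxCut-type SDP: min <C,X> s.t. diag(X) = e, X >= 0 (PSD).
# Perturbed optimality conditions with dual vector y and dual slack Z:
#   diag(X) - e        = 0   (primal feasibility)
#   Diag(y) + Z - C    = 0   (dual feasibility)
#   Z X - mu I         = 0   (perturbed complementarity, unsymmetrized)
n = 4
rng = np.random.default_rng(0)
L = rng.standard_normal((n, n))
C = (L + L.T) / 2                          # symmetric cost matrix (assumed data)

def F(v, mu):
    """Residual of the overdetermined bilinear system F_mu(X,y,Z)."""
    X = v[:n*n].reshape(n, n)
    y = v[n*n:n*n + n]
    Z = v[n*n + n:].reshape(n, n)
    return np.concatenate([
        np.diag(X) - 1.0,                  # diag(X) = e
        (np.diag(y) + Z - C).ravel(),      # Diag(y) + Z = C
        (Z @ X - mu * np.eye(n)).ravel(),  # Z X = mu I
    ])

# Start from X = Z = I, y = 0 and follow a decreasing mu schedule.
v = np.concatenate([np.eye(n).ravel(), np.zeros(n), np.eye(n).ravel()])
for mu in [1.0, 0.3, 0.1, 0.01]:
    v = least_squares(F, v, args=(mu,), method="lm").x

X = v[:n*n].reshape(n, n)
# diag(X) should be close to the all-ones vector once the residual is small.
print(np.round(np.diag(X), 6))
```

Note that the complementarity block $ZX-\mu I$ is left unsymmetrized, exactly as the abstract describes: the residual is treated as a plain vector of $2n^2+n$ bilinear equations.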
To illustrate the approach, we apply it to the well-known SDP relaxation of the MaxCut problem. This includes the derivation of the {\em optimal diagonal preconditioner}. The numerical tests show that the algorithm exploits sparsity and obtains q-quadratic convergence.

Keywords: Semidefinite Programming, large sparse problems, inexact Gauss-Newton method, preconditioned conjugate gradients, optimal diagonal preconditioner, MaxCut Problem.

Category 1: Linear, Cone and Semidefinite Programming (Semidefinite Programming)
Category 2: Nonlinear Optimization (Nonlinear Systems and Least-Squares)
Category 3: Combinatorial Optimization

Citation: CORR 2001-49, Department of Combinatorics and Optimization, University of Waterloo, Sept./01

Download: [Postscript] [Compressed Postscript] [PDF]

Entry Submitted: 09/06/2001
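The inexact Gauss-Newton step described above amounts to approximately solving a least squares subproblem $\min_d \|Jd + r\|$ by preconditioned conjugate gradients. The sketch below (a generic illustration, not the paper's code) runs CG on the normal equations with the column-norm diagonal scaling $M=\mathrm{diag}(J^TJ)$; this is a common, simple choice and is not claimed to be the paper's derived optimal diagonal preconditioner. Note that $J$ is touched only through products $Jp$ and $J^Tw$, so no Schur complement or explicit matrix representation is ever formed.

```python
import numpy as np

def pcg_normal_eqs(J, r, tol=1e-10, maxit=500):
    """Solve the Gauss-Newton step min ||J d + r|| via preconditioned CG on
    the normal equations J^T J d = -J^T r, using the diagonal preconditioner
    M = diag(J^T J) (column-norm scaling). J appears only in products."""
    b = -J.T @ r
    M = np.sum(J * J, axis=0)          # diag(J^T J), assumed nonzero columns
    d = np.zeros(J.shape[1])
    res = b.copy()                     # residual of the normal equations
    z = res / M                        # preconditioned residual
    p = z.copy()
    rz = res @ z
    for _ in range(maxit):
        Jp = J @ p
        alpha = rz / (Jp @ Jp)         # p^T (J^T J) p computed matrix-free
        d += alpha * p
        res -= alpha * (J.T @ Jp)
        if np.linalg.norm(res) < tol * np.linalg.norm(b):
            break
        z = res / M
        rz_new = res @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return d

# Tiny check on random overdetermined data: the step matches a direct solve.
rng = np.random.default_rng(1)
J = rng.standard_normal((20, 6))
r = rng.standard_normal(20)
d = pcg_normal_eqs(J, r)
d_ref = np.linalg.lstsq(J, -r, rcond=None)[0]
print(np.allclose(d, d_ref, atol=1e-6))  # expect True
```

In the setting of the abstract, the dense $n$-column block makes its CG product a row scaling, while the sparse $\binom{n}{2}$-column block costs one sparse matrix multiplication per iteration.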