Smooth Optimization Approach for Covariance Selection

In this paper we study a smooth optimization approach for solving a class of non-smooth {\it strongly} concave maximization problems. In particular, we apply Nesterov's smooth optimization technique \cite{Nest83-1,Nest05-1} to their dual counterparts, which are smooth convex problems. We show that the resulting approach has $\cO(1/\sqrt{\epsilon})$ iteration complexity for finding an $\epsilon$-optimal solution to both the primal and dual problems. We then discuss the application of this approach to covariance selection, which is approximately solved as an $L_1$-norm penalized maximum likelihood estimation problem, and also propose a variant that substantially outperformed the original approach in our computational experiments. Finally, we compare the performance of these approaches with two other first-order methods studied in \cite{DaOnEl06}, namely Nesterov's $\cO(1/\epsilon)$ smooth approximation scheme and a block-coordinate descent method, for covariance selection on a set of randomly generated instances. The results show that our smooth optimization approach substantially outperforms the former method, and moreover, its variant dramatically outperforms both of their methods.
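For context, the $L_1$-norm penalized maximum likelihood formulation of covariance selection studied in \cite{DaOnEl06} is commonly written as follows, where $\hat\Sigma$ denotes the sample covariance matrix and $\rho > 0$ the penalty parameter (the exact weighting of the penalty may differ in the paper):
\[
\max_{X \succ 0} \ \log\det X - \langle \hat\Sigma, X \rangle - \rho \sum_{i,j} |X_{ij}|,
\]
whose optimal solution $X$ estimates the inverse covariance matrix; the non-smooth penalty term promotes zeros in $X$, which encode conditional independences in the Gaussian graphical model.

As an illustration of the $\cO(1/\sqrt{\epsilon})$ first-order machinery the approach builds on, below is a minimal sketch of Nesterov's accelerated gradient method for minimizing a generic smooth convex function with $L$-Lipschitz gradient. It is not the paper's dual covariance-selection solver, and the quadratic test problem at the end is a hypothetical example.

```python
import numpy as np

def nesterov_accelerated_gradient(grad, x0, L, iters=500):
    """Nesterov's optimal first-order method for a smooth convex f with
    L-Lipschitz gradient: f(x_k) - f* = O(L / k^2), hence an eps-optimal
    point is reached in O(1/sqrt(eps)) iterations.
    (Generic sketch, not the paper's dual covariance-selection solver.)"""
    x_prev = x0.copy()
    x = x0.copy()
    t_prev = 1.0
    for _ in range(iters):
        t = (1.0 + np.sqrt(1.0 + 4.0 * t_prev ** 2)) / 2.0
        y = x + ((t_prev - 1.0) / t) * (x - x_prev)  # momentum extrapolation
        x_prev, x = x, y - grad(y) / L               # gradient step from y
        t_prev = t
    return x

# Hypothetical usage: minimize f(x) = 0.5 x^T A x - b^T x,
# whose gradient is A x - b and Lipschitz constant is ||A||_2.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = nesterov_accelerated_gradient(lambda x: A @ x - b,
                                       np.zeros(2), L=np.linalg.norm(A, 2))
```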

Citation

Working Paper, Department of Mathematics, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada, June 2007.
