Optimization Online

Regularization vs. Relaxation: A conic optimization perspective of statistical variable selection

Hongbo Dong (hongbo.dong***at***wsu.edu)
Kun Chen (kun.chen***at***uconn.edu)
Jeff Linderoth (linderoth***at***wisc.edu)

Abstract: Variable selection is a fundamental task in statistical data analysis. Sparsity-inducing regularization methods are a popular class of methods that simultaneously perform variable selection and model estimation. The central problem is a quadratic optimization problem with an $\ell_0$-norm penalty. Exactly enforcing the $\ell_0$-norm penalty is computationally intractable for larger-scale problems, so various sparsity-inducing penalty functions that approximate the $\ell_0$-norm have been introduced. In this paper, we show that viewing the problem from a convex relaxation perspective offers new insights. In particular, we show that a popular sparsity-inducing concave penalty function known as the Minimax Concave Penalty (MCP), and the reverse Huber penalty derived in recent work by Pilanci, Wainwright and El Ghaoui, can both be obtained as special cases of a lifted convex relaxation called the perspective relaxation. The optimal perspective relaxation is a related minimax problem that balances the overall convexity and the tightness of approximation to the $\ell_0$ norm; we show that it can be solved by a semidefinite relaxation. Moreover, a probabilistic interpretation of the semidefinite relaxation reveals connections with the Boolean quadric polytope in combinatorial optimization. Finally, by reformulating the $\ell_0$-norm penalized problem as a two-level problem whose inner level is a Max-Cut problem, our proposed semidefinite relaxation can be realized by replacing the inner-level problem with the semidefinite relaxation studied by Goemans and Williamson. This interpretation suggests using the Goemans-Williamson rounding procedure to find approximate solutions to the $\ell_0$-norm penalized problem. Numerical experiments demonstrate the tightness of our proposed semidefinite relaxation and the effectiveness of Goemans-Williamson rounding for finding approximate solutions.
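As a concrete illustration of the objects named in the abstract, the display below sketches the $\ell_0$-penalized least-squares problem and the commonly used form of the MCP due to Zhang; the notation ($y$, $X$, $\beta$, $\lambda$, $\gamma$) is ours and is not taken from the paper.

\[
\min_{\beta \in \mathbb{R}^p} \; \tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda \|\beta\|_0,
\qquad
\|\beta\|_0 = \#\{\, j : \beta_j \neq 0 \,\},
\]
\[
P_{\lambda,\gamma}(t) =
\begin{cases}
\lambda|t| - \dfrac{t^2}{2\gamma}, & |t| \le \gamma\lambda,\\[4pt]
\dfrac{\gamma\lambda^2}{2}, & |t| > \gamma\lambda,
\end{cases}
\qquad \lambda > 0,\ \gamma > 1.
\]

The abstract also suggests Goemans-Williamson rounding to recover approximate solutions from the semidefinite relaxation. The following is a minimal Python sketch of generic Goemans-Williamson hyperplane rounding, assuming a solved SDP matrix with unit diagonal is available; the candidate-scoring rule used here (a plain quadratic form) is a placeholder, not the paper's $\ell_0$-penalized regression objective.

import numpy as np

def gw_round(Y, num_samples=100, rng=None):
    """Generic Goemans-Williamson hyperplane rounding (sketch).

    Y : (n, n) positive semidefinite matrix returned by an SDP solver,
        assumed to have unit diagonal (as in the Max-Cut relaxation).
    Returns the best +/-1 vector found over `num_samples` random
    hyperplanes, scored by the quadratic form x^T Y x (a stand-in for
    the application-specific objective).
    """
    rng = np.random.default_rng(rng)
    n = Y.shape[0]
    # Factor Y = V V^T; clip tiny negative eigenvalues from solver noise.
    w, U = np.linalg.eigh(Y)
    V = U * np.sqrt(np.clip(w, 0.0, None))
    best_x, best_val = None, -np.inf
    for _ in range(num_samples):
        g = rng.standard_normal(n)   # random hyperplane normal
        x = np.sign(V @ g)
        x[x == 0] = 1.0              # break ties deterministically
        val = x @ Y @ x
        if val > best_val:
            best_x, best_val = x, val
    return best_x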

Keywords: conic optimization, variable selection, concave penalty function, sparse linear regression

Category 1: Linear, Cone and Semidefinite Programming

Category 2: Integer Programming ((Mixed) Integer Nonlinear Programming)

Category 3: Applications -- Science and Engineering (Statistics)

Citation: Working paper, Department of Mathematics, Washington State University, Pullman, WA 99164

Download: [PDF]

Entry Submitted: 05/28/2015
Entry Accepted: 05/28/2015
Entry Last Modified: 10/17/2015
