Optimization Online

Worst-Case Hardness of Approximation for Sparse Optimization with L0 Norm

Yichen Chen (yichenc***at***cs.princeton.edu)
Mengdi Wang (mengdiw***at***princeton.edu)

Abstract: In this paper, we consider sparse optimization problems with an L0 norm penalty or constraint. We prove that it is strongly NP-hard to find an approximate optimal solution within a certain error bound; this yields a lower bound on the approximation error of any deterministic polynomial-time algorithm, unless P = NP. Applying the complexity result to sparse linear regression reveals a gap between computational accuracy and statistical accuracy: it is intractable to approximate the estimator within constant factors of its statistical error. We also show that distinguishing the best k-sparse solution from the best (k + 1)-sparse solution is computationally hard, which suggests that tuning the sparsity level is itself hard.
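
For concreteness, the two problem forms mentioned in the abstract are commonly written as follows. This is a generic textbook formulation of L0-regularized regression; the paper's precise objective and error bounds may differ:

    min_x  ||Ax - b||_2^2 + lambda * ||x||_0          (L0-penalized form, lambda > 0)
    min_x  ||Ax - b||_2^2   subject to  ||x||_0 <= k  (L0-constrained form)

Here ||x||_0 counts the number of nonzero entries of x, (A, b) are the regression data, and k is the sparsity level whose tuning the final hardness result concerns.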

Keywords: complexity, hardness of approximation, NP-hardness, sparsity, L0 norm

Category 1: Nonlinear Optimization

Category 2: Combinatorial Optimization (Other)

Download: [PDF]

Entry Submitted: 02/22/2016
Entry Accepted: 02/22/2016
Entry Last Modified: 03/29/2016
