Optimization Online

Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence

Vasileios Charisopoulos (vc333@cornell.edu)
Yudong Chen (yudong.chen@cornell.edu)
Damek Davis (dsd95@cornell.edu)
Mateo Diaz (md825@cornell.edu)
Lijun Ding (ld446@cornell.edu)
Dmitriy Drusvyatskiy (ddrusv@uw.edu)

Abstract: The task of recovering a low-rank matrix from its noisy linear measurements plays a central role in computational science. Smooth formulations of the problem often exhibit an undesirable phenomenon: the condition number, classically defined, scales poorly with the dimension of the ambient space. In contrast, we show that in a variety of concrete circumstances, nonsmooth penalty formulations do not suffer from the same type of ill-conditioning. Consequently, standard algorithms for nonsmooth optimization, such as subgradient and prox-linear methods, converge at a rapid dimension-independent rate when initialized within constant relative error of the solution. Moreover, nonsmooth formulations are naturally robust against outliers. Our framework subsumes such important computational tasks as phase retrieval, blind deconvolution, quadratic sensing, matrix completion, and robust PCA. Numerical experiments on these problems illustrate the benefits of the proposed approach.
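To make the setting concrete, here is a minimal Python sketch (not taken from the paper) of the Polyak subgradient method applied to one instance of the framework: noiseless real phase retrieval with an l1 penalty, f(x) = (1/m) sum_i |<a_i, x>^2 - b_i|. The problem sizes, initialization radius, and iteration budget are illustrative assumptions; with exact measurements the minimal value is f* = 0, which makes the Polyak step size computable.

    import numpy as np

    # Sketch: Polyak subgradient method for noiseless phase retrieval,
    #   f(x) = (1/m) * sum_i |(a_i . x)^2 - b_i|,  minimal value f* = 0.
    rng = np.random.default_rng(0)
    d, m = 50, 400                       # dimension, number of measurements (illustrative)
    A = rng.standard_normal((m, d))      # Gaussian measurement vectors a_i as rows
    x_true = rng.standard_normal(d)
    b = (A @ x_true) ** 2                # exact quadratic measurements

    # Initialize within constant relative error of the solution.
    x = x_true + 0.3 * np.linalg.norm(x_true) / np.sqrt(d) * rng.standard_normal(d)

    for _ in range(500):
        Ax = A @ x
        r = Ax ** 2 - b                  # residuals
        f = np.abs(r).mean()             # nonsmooth objective value
        if f < 1e-12:
            break
        g = (2.0 / m) * (A.T @ (np.sign(r) * Ax))   # subgradient of f at x
        x -= f / (g @ g) * g             # Polyak step with f* = 0

    # Report error up to the global sign ambiguity x -> -x.
    err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
    print(f"relative error: {err / np.linalg.norm(x_true):.2e}")

With exact measurements and a good initialization, this iteration typically drives the relative error toward machine precision at a linear rate, mirroring the rapid local convergence described in the abstract.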

Keywords: low-rank matrix recovery, Gauss-Newton, subgradient method, weak convexity, composite optimization

Category 1: Convex and Nonsmooth Optimization

Citation:

Download: [PDF]

Entry Submitted: 04/23/2019
Entry Accepted: 04/23/2019
Entry Last Modified: 04/23/2019
