Optimization Online


Stochastic Primal-Dual Method for Empirical Risk Minimization with O(1) Per-Iteration Complexity

Conghui Tan(chtan***at***se.cuhk.edu.hk)
Tong Zhang(tongzhang***at***tongzhang-ml.org)
Shiqian Ma(sqma***at***math.ucdavis.edu)
Ji Liu(ji.liu.uwisc***at***gmail.com)

Abstract: Regularized empirical risk minimization with a linear predictor appears frequently in machine learning. In this paper, we propose a new stochastic primal-dual method for this class of problems. Unlike existing methods, the proposed method requires only O(1) operations per iteration. We also develop a variance-reduced variant of the algorithm that converges linearly. Numerical experiments suggest that our methods are faster than existing ones such as proximal SGD, SVRG, and SAGA on high-dimensional problems.
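To illustrate how O(1) per-iteration cost can arise for this problem class, the sketch below applies a generic doubly stochastic primal-dual update to the saddle-point form of ridge regression, min_x (1/2n)||Ax-b||^2 + (lam/2)||x||^2: each iteration samples one example i and one coordinate j and touches only the pair (y_i, x_j). This is an illustrative reconstruction, not the authors' algorithm; the function name, step sizes, and the ridge-regression instance are all assumptions made for the example.

```python
import numpy as np

def doubly_stochastic_pd(A, b, lam=0.1, sigma=0.1, tau=0.1, iters=200000, seed=0):
    """Hedged sketch: doubly stochastic primal-dual updates for ridge
    regression via its saddle-point form
        min_x max_y (1/n) sum_i [ y_i a_i^T x - phi_i^*(y_i) ] + (lam/2)||x||^2,
    with phi_i(z) = (1/2)(z - b_i)^2.  Each iteration touches ONE sample i
    and ONE coordinate j, so the per-iteration cost is O(1).  Step sizes
    are illustrative, not tuned; no variance reduction is used here."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    y = np.zeros(n)          # one dual variable per sample
    for _ in range(iters):
        i = rng.integers(n)  # random sample index
        j = rng.integers(d)  # random coordinate index
        # Dual ascent on y_i: d*A[i,j]*x[j] is an unbiased estimate of the
        # full inner product a_i^T x; the closed-form prox of phi_i^* follows.
        g_dual = d * A[i, j] * x[j]
        y[i] = (y[i] + sigma * (g_dual - b[i])) / (1.0 + sigma)
        # Primal descent on x_j: y[i]*A[i,j] is an unbiased estimate of the
        # j-th component of (1/n) A^T y; prox of (lam/2) x_j^2 in closed form.
        g_primal = y[i] * A[i, j]
        x[j] = (x[j] - tau * g_primal) / (1.0 + tau * lam)
    return x
```

Because the coordinate estimates are unbiased but noisy, a fixed-step-size scheme like this only converges to a neighborhood of the saddle point; obtaining a linear rate, as the abstract claims for the variance-reduced variant, would require an additional variance-reduction mechanism not sketched here.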


Category 1: Convex and Nonsmooth Optimization


Download: [PDF]

Entry Submitted: 11/02/2018
Entry Accepted: 11/03/2018
Entry Last Modified: 11/02/2018

