Optimization Online


A Distributed Quasi-Newton Algorithm for Empirical Risk Minimization with Nonsmooth Regularization

Ching-pei Lee(ching-pei***at***cs.wisc.edu)
Cong Han Lim(clim9***at***wisc.edu)
Stephen Wright(swright***at***cs.wisc.edu)

Abstract: In this paper, we propose a communication- and computation-efficient distributed optimization algorithm that uses second-order information to solve ERM problems with a nonsmooth regularization term. Current second-order and quasi-Newton methods for this problem either do not work well in the distributed setting or work only for specific regularizers. Our algorithm uses successive quadratic approximations, and we describe how to maintain an approximation of the Hessian and solve subproblems efficiently in a distributed manner. The proposed method enjoys global linear convergence for a broad range of non-strongly convex problems that includes the most commonly used ERMs, which in turn yields lower communication complexity. Empirical results also demonstrate that our method significantly reduces communication cost and running time relative to current state-of-the-art methods.
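The abstract's core ingredients (a successive quadratic approximation with a quasi-Newton metric, followed by a proximal step for the nonsmooth regularizer) can be illustrated with a deliberately simplified, single-machine sketch. The example below is NOT the paper's distributed algorithm: it solves L1-regularized least squares with the simplest possible quasi-Newton metric, a Barzilai-Borwein scalar, so the subproblem reduces to soft-thresholding. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (closed form for the L1 regularizer)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_quasi_newton_l1(A, b, lam, iters=300):
    """Toy proximal quasi-Newton sketch (NOT the paper's distributed method).

    Minimizes f(w) = (1/2n) ||A w - b||^2 + lam * ||w||_1 by successive
    quadratic approximations of the smooth part, using a scalar
    Barzilai-Borwein curvature estimate H as the quasi-Newton metric.
    With a scalar metric, each subproblem is solved exactly by one
    soft-thresholding (proximal) step.
    """
    n, d = A.shape
    w = np.zeros(d)
    g = A.T @ (A @ w - b) / n   # gradient of the smooth (loss) part
    H = 1.0                     # scalar approximation of the Hessian
    for _ in range(iters):
        # Minimize the quadratic model g.(z-w) + (H/2)||z-w||^2 + lam||z||_1
        w_new = soft_threshold(w - g / H, lam / H)
        g_new = A.T @ (A @ w_new - b) / n
        s, y = w_new - w, g_new - g
        if s @ s > 0:
            # Barzilai-Borwein scalar secant update: H ~ s'y / s's
            H = max((s @ y) / (s @ s), 1e-8)
        w, g = w_new, g_new
    return w
```

In the paper's setting the metric is a full LBFGS-style Hessian approximation maintained across machines and the subproblem is solved inexactly in a distributed manner; this sketch only shows the quadratic-approximation-plus-prox structure those steps share.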

Keywords: Distributed optimization, Empirical risk minimization, Nonsmooth optimization, Regularized optimization, Variable metrics, Quasi-Newton, Proximal method, Inexact method

Category 1: Optimization Software and Modeling Systems (Parallel Algorithms )

Category 2: Applications -- Science and Engineering (Data-Mining )

Category 3: Nonlinear Optimization


Download: [PDF]

Entry Submitted: 03/04/2018
Entry Accepted: 03/04/2018
Entry Last Modified: 03/04/2018
