Optimization Online


A Robust Multi-Batch L-BFGS Method for Machine Learning

Albert S. Berahas (albertberahas***at***u.northwestern.edu)
Martin Takáč (takac.mt***at***gmail.com)

Abstract: This paper describes an implementation of the L-BFGS method designed to deal with two adversarial situations. The first occurs in distributed computing environments where some of the computational nodes devoted to the evaluation of the function and gradient are unable to return results on time. A similar challenge occurs in a multi-batch approach in which the data points used to compute the function and gradient are purposely changed at each iteration to accelerate the learning process. Difficulties arise because L-BFGS employs gradient differences to update the Hessian approximations, and when these gradients are computed using different data points the updating process can be unstable. This paper shows how to perform stable quasi-Newton updating in the multi-batch setting, studies its convergence properties for both the convex and nonconvex cases, and illustrates the behavior of the algorithm on a distributed computing platform on logistic regression (binary classification) and neural network training problems that arise in machine learning.
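The abstract does not spell out the updating mechanism, but the standard remedy in the multi-batch L-BFGS literature is to evaluate both gradients in the difference y_k on the samples shared by two consecutive batches, so the curvature information is consistent. Below is a minimal Python sketch of that idea; the names overlap_curvature_pair, lbfgs_direction, grad_fn, and the toy least-squares data are hypothetical, for illustration only, and are not the authors' implementation.

```python
import numpy as np

def overlap_curvature_pair(grad_fn, x_old, x_new, overlap_idx, eps=1e-10):
    # Form an (s, y) curvature pair for L-BFGS using only the samples
    # shared by two consecutive batches (the overlap set O_k), so both
    # gradients in the difference see the same data points.
    s = x_new - x_old
    y = grad_fn(x_new, overlap_idx) - grad_fn(x_old, overlap_idx)
    # Skip the update when the curvature condition s'y > 0 fails,
    # a common safeguard when gradients are noisy.
    if s.dot(y) <= eps * y.dot(y):
        return None
    return s, y

def lbfgs_direction(grad, pairs):
    # Standard L-BFGS two-loop recursion: returns an approximation of
    # H_k * grad, where H_k is built from the stored (s, y) pairs.
    q = grad.copy()
    alphas = []
    for s, y in reversed(pairs):
        rho = 1.0 / y.dot(s)
        a = rho * s.dot(q)
        alphas.append(a)
        q -= a * y
    if pairs:
        s, y = pairs[-1]
        q *= s.dot(y) / y.dot(y)  # initial scaling gamma_k
    for (s, y), a in zip(pairs, reversed(alphas)):
        rho = 1.0 / y.dot(s)
        b = rho * y.dot(q)
        q += (a - b) * s
    return q

# Toy usage on a least-squares loss (hypothetical setup).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((100, 5)), rng.standard_normal(100)
grad_fn = lambda x, idx: A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)

x0, x1 = np.zeros(5), 0.1 * rng.standard_normal(5)
overlap = np.arange(20, 40)  # samples common to two consecutive batches
pair = overlap_curvature_pair(grad_fn, x0, x1, overlap)
d = lbfgs_direction(grad_fn(x1, np.arange(100)), [pair] if pair else [])
```

Restricting the gradient difference to the overlap set is what keeps the quasi-Newton update stable even though the full batch changes every iteration; the same construction degrades gracefully in the fault-tolerant setting, where the "overlap" is simply whichever nodes returned results in both iterations.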

Keywords: L-BFGS, multi-batch, fault-tolerant, consistency, sample selection, machine learning

Category 1: Nonlinear Optimization (Unconstrained Optimization)

Category 2: Convex and Nonsmooth Optimization (Convex Optimization)


Download: [PDF]

Entry Submitted: 07/25/2017
Entry Accepted: 07/25/2017
Entry Last Modified: 03/31/2019

