Optimization Online


Optimization Methods for Large-Scale Machine Learning

Léon Bottou(leon***at***bottou.org)
Frank E. Curtis(frank.e.curtis***at***gmail.com)
Jorge Nocedal(j-nocedal***at***northwestern.edu)

Abstract: This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications. Through case studies on text classification and the training of deep neural networks, we discuss how optimization problems arise in machine learning and what makes them challenging. A major theme of our study is that large-scale machine learning represents a distinctive setting in which the stochastic gradient (SG) method has traditionally played a central role while conventional gradient-based nonlinear optimization techniques typically falter. Based on this viewpoint, we present a comprehensive theory of a straightforward, yet versatile SG algorithm, discuss its practical behavior, and highlight opportunities for designing algorithms with improved performance. This leads to a discussion about the next generation of optimization methods for large-scale machine learning, including an investigation of two main streams of research on techniques that diminish noise in the stochastic directions and methods that make use of second-order derivative approximations.
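The abstract's central object is the basic stochastic gradient (SG) method, which updates the iterate using the gradient of a single randomly sampled example rather than the full objective. A minimal sketch of that update on a one-parameter least-squares problem is below; the data, step size, and epoch count are illustrative assumptions, not details taken from the paper.

```python
import random

def sg_least_squares(data, lr=0.1, epochs=50, seed=0):
    """Minimize (1/n) * sum_i (w*x_i - y_i)^2 by sampling one example per step."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        for _ in range(len(data)):
            x, y = rng.choice(data)       # draw one training example at random
            grad = 2.0 * (w * x - y) * x  # stochastic estimate of the gradient
            w -= lr * grad                # SG update: w <- w - lr * grad
    return w

# Synthetic data generated from y = 3x; SG should recover w close to 3.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
w = sg_least_squares(data)
```

Each update costs one example's gradient instead of a full pass over the data, which is the property that makes SG the workhorse in the large-scale setting the paper studies.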

Keywords: numerical optimization, machine learning, stochastic gradient methods, algorithm complexity analysis, noise reduction methods, second-order methods

Category 1: Nonlinear Optimization

Category 2: Convex and Nonsmooth Optimization

Category 3: Stochastic Programming

Citation: Submitted to SIAM Review


Entry Submitted: 06/15/2016
Entry Accepted: 06/15/2016
Entry Last Modified: 06/15/2016

