Optimization Online

A Stochastic Quasi-Newton Method for Large-Scale Optimization

Richard Byrd (richardhbyrd***at***gmail.com)
Samantha Hansen (samanthahansen2014***at***u.northwestern.edu)
Jorge Nocedal (nocedal***at***eecs.northwestern.edu)
Yoram Singer (yoram.singer***at***gmail.com)

Abstract: The question of how to incorporate curvature information in stochastic approximation methods is challenging. The direct application of classical quasi-Newton updating techniques for deterministic optimization leads to noisy curvature estimates that have harmful effects on the robustness of the iteration. In this paper, we propose a stochastic quasi-Newton method that is efficient, robust, and scalable. It employs the classical BFGS update formula in its limited memory form, and is based on the observation that it is beneficial to collect curvature information pointwise, and at regular intervals, through (sub-sampled) Hessian-vector products. This technique differs from the classical approach of computing differences of gradients, where controlling the quality of the curvature estimates can be difficult. We present numerical results on problems arising in machine learning that suggest that the proposed method shows much promise.
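
Illustrative sketch (not from the paper): the following minimal Python code follows the high-level description in the abstract. Stochastic gradient steps are preconditioned by a limited-memory BFGS matrix whose curvature pairs (s, y) are formed every L iterations from a sub-sampled Hessian-vector product evaluated at an averaged iterate, rather than from differences of noisy stochastic gradients. The logistic-regression objective, synthetic data, batch sizes, step size, and memory size below are assumptions chosen for illustration; they do not reproduce the algorithmic details or experiments of the paper.

import numpy as np

# Synthetic binary logistic regression problem (illustrative data only,
# not the experiments reported in the paper).
rng = np.random.default_rng(0)
n, d = 5000, 50
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
labels = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

def stoch_grad(w, idx):
    """Mini-batch gradient of the average logistic loss over rows idx."""
    Xb, yb = X[idx], labels[idx]
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    return Xb.T @ (p - yb) / len(idx)

def subsampled_hvp(w, v, idx):
    """Sub-sampled Hessian-vector product of the logistic loss over rows idx."""
    Xb = X[idx]
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    return Xb.T @ ((p * (1.0 - p)) * (Xb @ v)) / len(idx)

def two_loop(g, S, Y):
    """Standard L-BFGS two-loop recursion: approximates H times g."""
    q, alphas = g.copy(), []
    for s, yv in zip(reversed(S), reversed(Y)):
        a = (s @ q) / (yv @ s)
        alphas.append(a)
        q -= a * yv
    q *= (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])   # scaled initial matrix H0
    for (s, yv), a in zip(zip(S, Y), reversed(alphas)):
        b = (yv @ q) / (yv @ s)
        q += (a - b) * s
    return q

# Illustrative parameters: gradient batch b, Hessian batch bH, curvature
# update spacing L, limited memory M, and a fixed step size alpha.
b, bH, L, M, alpha = 64, 600, 10, 10, 0.05
w = np.zeros(d)
S, Y = [], []                              # stored curvature pairs (s, y)
w_bar, w_bar_prev = np.zeros(d), None

for k in range(1, 3001):
    idx = rng.choice(n, b, replace=False)
    g = stoch_grad(w, idx)
    step = two_loop(g, S, Y) if S else g   # plain SGD until pairs exist
    w -= alpha * step
    w_bar += w
    if k % L == 0:                         # collect curvature every L steps
        w_bar /= L
        if w_bar_prev is not None:
            s = w_bar - w_bar_prev
            idxH = rng.choice(n, bH, replace=False)
            yv = subsampled_hvp(w_bar, s, idxH)  # curvature via Hessian-vector product
            if yv @ s > 1e-10:                   # keep only safely positive pairs
                S.append(s); Y.append(yv)
                if len(S) > M:
                    S.pop(0); Y.pop(0)
        w_bar_prev, w_bar = w_bar, np.zeros(d)

Forming y through a Hessian-vector product on a separate sub-sample, and only at spaced-out averaged iterates, corresponds to collecting curvature information "pointwise, and at regular intervals" as described in the abstract, instead of differencing noisy stochastic gradients.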

Keywords: stochastic optimization, quasi-Newton method, machine learning, limited-memory BFGS

Category 1: Stochastic Programming

Category 2: Nonlinear Optimization (Unconstrained Optimization)

Category 3: Applications -- Science and Engineering (Data-Mining)

Citation: Northwestern University, January 2014

Download: [PDF]

Entry Submitted: 01/29/2014
Entry Accepted: 01/29/2014
Entry Last Modified: 02/17/2015
