Optimization Online
Quasi-Newton Methods for Deep Learning: Forget the Past, Just Sample

Albert S. Berahas (albertberahas***at***lehigh.edu)
Majid Jahani (maj316***at***lehigh.edu)
Martin Takáč (Takac.MT***at***gmail.com)

Abstract: We present two sampled quasi-Newton methods: sampled LBFGS and sampled LSR1. In contrast to the classical variants of these methods, which sequentially build (inverse) Hessian approximations as the optimization progresses, our proposed methods sample points randomly around the current iterate to produce these approximations. As a result, the constructed approximations make use of more reliable (recent and local) information and do not depend on past information that could be significantly stale. Our proposed algorithms are efficient in terms of accessed data points (epochs) and have enough concurrency to take advantage of distributed computing environments. We provide convergence guarantees for our proposed methods. Numerical tests on a toy classification problem and on popular benchmarking neural network training tasks reveal that the methods outperform their classical variants and are competitive with first-order methods such as Adam.
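To make the sampling idea concrete, the Python sketch below shows one way curvature pairs could be regenerated at each iterate from random displacements, and then fed to the standard L-BFGS two-loop recursion. This is a minimal illustration of the idea described in the abstract, not the authors' implementation; the names and parameters (grad_fn, m, radius) are illustrative assumptions.

    import numpy as np

    def sample_curvature_pairs(grad_fn, w, m=10, radius=0.1, rng=None):
        # Illustrative sketch: build m curvature pairs (s, y) from random
        # displacements around the current iterate w, rather than from past steps.
        # grad_fn, m, and radius are assumed names/values, not the paper's exact setup.
        rng = np.random.default_rng() if rng is None else rng
        g = grad_fn(w)
        S, Y = [], []
        for _ in range(m):
            s = radius * rng.standard_normal(w.shape)   # random point around w
            y = grad_fn(w + s) - g                      # curvature information along s
            if s @ y > 1e-10 * np.linalg.norm(s) * np.linalg.norm(y):
                S.append(s)                             # keep only well-conditioned pairs
                Y.append(y)
        return S, Y

    def sampled_lbfgs_direction(g, S, Y):
        # Standard L-BFGS two-loop recursion applied to the freshly sampled pairs.
        q = g.copy()
        rhos = [1.0 / (s @ y) for s, y in zip(S, Y)]
        alphas = []
        for s, y, rho in zip(reversed(S), reversed(Y), reversed(rhos)):
            a = rho * (s @ q)
            alphas.append(a)
            q -= a * y
        if S:
            q *= (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])      # initial Hessian scaling
        for (s, y, rho), a in zip(zip(S, Y, rhos), reversed(alphas)):
            b = rho * (y @ q)
            q += (a - b) * s
        return -q                                       # quasi-Newton search direction

An outer loop would resample the pairs around each new iterate and take a step along sampled_lbfgs_direction(grad_fn(w), S, Y); in the deep learning setting, grad_fn would be evaluated on mini-batches.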

Keywords: quasi-Newton, curvature pairs, sampling, deep learning

Category 1: Nonlinear Optimization

Category 2: Nonlinear Optimization (Unconstrained Optimization )

Citation: Lehigh University, January 2019

Download: [PDF]

Entry Submitted: 02/02/2019
Entry Accepted: 02/02/2019
Entry Last Modified: 05/28/2019
