Optimization Online
Trust-Region Optimization Methods Using Limited-Memory Symmetric Rank-One Updates for Off-The-Shelf Machine Learning

Jennifer Erway (erwayjb@wfu.edu)
Joshua Griffin (Joshua.Griffin@sas.com)
Riadh Omheni (Riadh.Omheni@sas.com)
Roummel Marcia (rmarcia@ucmerced.edu)

Abstract: Machine learning (ML) problems are often posed as highly nonlinear and nonconvex unconstrained optimization problems. Methods for solving ML problems based on stochastic gradient descent generally require fine-tuning many hyper-parameters. In this paper, we propose an alternative approach for solving ML problems based on a quasi-Newton trust-region framework that does not require extensive parameter tuning. Numerical experiments on the MNIST data set show that, with a fixed computational time budget, the proposed approach achieves better results than other off-the-shelf approaches such as limited-memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton methods and Hessian-free methods.
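For readers unfamiliar with the symmetric rank-one (SR1) update underlying the proposed trust-region method, the sketch below shows a generic, full-memory SR1 update of a Hessian approximation with the standard skipping safeguard. It is a textbook illustration only, written in Python with assumed variable names (B, s, y, r); it is not the authors' limited-memory implementation.

```python
import numpy as np

def sr1_update(B, s, y, r=1e-8):
    """One symmetric rank-one (SR1) quasi-Newton update.

    B : current Hessian approximation (n x n symmetric matrix)
    s : step taken, x_{k+1} - x_k
    y : gradient difference, grad f(x_{k+1}) - grad f(x_k)
    r : skipping tolerance (standard SR1 safeguard)
    """
    v = y - B @ s                  # residual of the secant condition B s = y
    denom = v @ s
    # Skip the update when the denominator is too small relative to
    # ||s|| * ||v||; this avoids numerical blow-up and keeps B well defined.
    if abs(denom) < r * np.linalg.norm(s) * np.linalg.norm(v):
        return B
    return B + np.outer(v, v) / denom
```

Unlike BFGS, the SR1 update does not enforce positive definiteness, which is one reason it pairs naturally with a trust-region framework for nonconvex problems; a limited-memory variant would store only a few recent (s, y) pairs rather than the full matrix B.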

Keywords: Nonconvex optimization, quasi-Newton methods, symmetric rank-one update, machine learning

Category 1: Nonlinear Optimization (Unconstrained Optimization)

Category 2: Applications -- Science and Engineering (Other)

Citation: Technical report 2017-2, Department of Mathematics and Statistics, Wake Forest University (2017)

Download: [PDF]

Entry Submitted: 10/26/2017
Entry Accepted: 10/26/2017
Entry Last Modified: 10/26/2017
