Optimization Online


A Stochastic Trust Region Algorithm Based on Careful Step Normalization

Frank E. Curtis (frank.e.curtis***at***lehigh.edu)
Katya Scheinberg (katyas***at***lehigh.edu)
Rui Shi (rus415***at***lehigh.edu)

Abstract: An algorithm is proposed for solving stochastic and finite sum minimization problems. Based on a trust region methodology, the algorithm employs normalized steps, at least as long as the norms of the stochastic gradient estimates lie within a specified interval. The complete algorithm, which dynamically chooses whether or not to employ normalized steps, is proved to possess convergence guarantees similar to those of a traditional stochastic gradient approach under various sets of conditions on the accuracy of the stochastic gradient estimates and the choice of stepsize sequence. Results of numerical experiments are presented in which the method is applied to convex and nonconvex machine learning test problems. These results illustrate that the method can outperform a traditional stochastic gradient approach.
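The abstract's step rule can be sketched as follows. This is a minimal illustrative reading, not the paper's exact algorithm: it takes a normalized step when the stochastic gradient norm lies in an interval determined by two hypothetical tuning parameters (here called gamma1 and gamma2, with gamma1 >= gamma2 > 0), and a scaled stochastic gradient step otherwise.

```python
import numpy as np

def normalized_trust_region_step(x, g, alpha, gamma1, gamma2):
    """One update of a normalization-based step rule (illustrative sketch).

    Takes a normalized step -alpha * g / ||g|| when ||g|| lies in
    [1/gamma1, 1/gamma2]; otherwise takes a scaled gradient step.
    The parameters gamma1 >= gamma2 > 0 are hypothetical, chosen here
    only to illustrate the interval-based switching described above.
    """
    norm_g = np.linalg.norm(g)
    if norm_g < 1.0 / gamma1:
        # Small gradient estimate: plain scaled stochastic gradient step.
        return x - alpha * gamma1 * g
    elif norm_g <= 1.0 / gamma2:
        # Norm within the specified interval: normalized step of length alpha.
        return x - alpha * g / norm_g
    else:
        # Large gradient estimate: damped stochastic gradient step.
        return x - alpha * gamma2 * g

# Toy usage on f(x) = 0.5 * ||x||^2, whose (exact) gradient is g = x.
x = np.array([5.0, -3.0])
for _ in range(20):
    x = normalized_trust_region_step(x, g=x, alpha=0.5,
                                     gamma1=2.0, gamma2=0.5)
```

In this deterministic toy run the iterate contracts geometrically while the gradient norm is large, then decreases by a fixed amount alpha per normalized step, mimicking how normalization bounds the step length regardless of the gradient magnitude.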

Keywords: stochastic optimization, finite sum minimization, stochastic gradient method, trust region method, machine learning, logistic regression, deep neural networks

Category 1: Nonlinear Optimization

Category 2: Nonlinear Optimization (Unconstrained Optimization)

Category 3: Stochastic Programming

Citation: Lehigh ISE/COR@L Technical Report 17T-017


Entry Submitted: 12/29/2017
Entry Accepted: 12/29/2017
Entry Last Modified: 06/26/2018
