Optimization Online

Adaptive Sampling Quasi-Newton Methods for Derivative-Free Stochastic Optimization

Raghu Bollapragada (raghu.bollapragada@utexas.edu)
Stefan M. Wild (wild@anl.gov)

Abstract: We consider stochastic zero-order optimization problems, which arise in settings ranging from simulation optimization to reinforcement learning. We propose an adaptive sampling quasi-Newton method in which the gradients of a stochastic function are estimated using finite differences within a common random number framework. We employ modified versions of a norm test and an inner product quasi-Newton test to control the sample sizes used in the stochastic approximations. We present preliminary numerical experiments that illustrate the potential performance benefits of the proposed method.
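
The two ingredients named in the abstract, common-random-number finite differences and an adaptive sample-size test, can be sketched as follows. This is only an illustrative sketch, not the authors' algorithm: the noisy quadratic objective f, the difference step h, the threshold theta, the sample-doubling schedule, and the simplified norm-test condition (variance of the averaged gradient bounded by theta^2 times its squared norm) are all assumptions made for the example; the paper uses modified norm and inner product quasi-Newton tests within a quasi-Newton iteration.

    # Illustrative sketch only; f, h, theta, and the doubling rule are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def f(x, xi):
        """Noisy quadratic stand-in for a stochastic objective F(x; xi)."""
        return 0.5 * np.dot(x, x) + np.dot(xi, x)

    def crn_fd_gradients(x, xis, h=1e-5):
        """One forward-difference gradient per realization xi, reusing the
        same xi at x and at each perturbed point (common random numbers)."""
        n = x.size
        grads = np.empty((len(xis), n))
        for i, xi in enumerate(xis):
            fx = f(x, xi)
            for j in range(n):
                e = np.zeros(n)
                e[j] = h
                grads[i, j] = (f(x + e, xi) - fx) / h
        return grads

    def adaptive_sample_gradient(x, n0=4, theta=0.5, max_samples=1024):
        """Grow the sample set until a norm-test-style condition holds:
        the estimated variance of the averaged gradient is at most
        theta^2 times the squared norm of the average itself."""
        xis = [rng.normal(size=x.size) for _ in range(n0)]
        while True:
            G = crn_fd_gradients(x, xis)
            g_bar = G.mean(axis=0)
            # Estimated variance of the sample-average gradient estimator.
            var_of_mean = G.var(axis=0, ddof=1).sum() / len(xis)
            if var_of_mean <= theta**2 * np.dot(g_bar, g_bar) or len(xis) >= max_samples:
                return g_bar, len(xis)
            # Sample size not yet adequate: double the number of realizations.
            xis += [rng.normal(size=x.size) for _ in range(len(xis))]

    g, m = adaptive_sample_gradient(np.array([1.0, -2.0, 0.5]))
    print(f"gradient estimate from {m} samples:", g)

Because the same xi is reused at x and at the perturbed points, the noise largely cancels in each finite difference, which is what makes small difference steps viable in this setting; the returned averaged gradient would then feed a quasi-Newton update.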

Keywords: Adaptive Sampling, Derivative-Free Optimization, Stochastic Optimization

Category 1: Nonlinear Optimization

Category 2: Stochastic Programming

Category 3: Convex and Nonsmooth Optimization (Convex Optimization)

Citation:

Download: [PDF]

Entry Submitted: 10/31/2019
Entry Accepted: 10/31/2019
Entry Last Modified: 10/31/2019
