Optimization Online
Reliable Off-policy Evaluation for Reinforcement Learning

Jie Wang(116010214***at***link.cuhk.edu.cn)
Rui Gao(rui.gao***at***mccombs.utexas.edu)
Hongyuan Zha(zhahy***at***cuhk.edu.cn)

Abstract: In a sequential decision-making problem, off-policy evaluation estimates the expected cumulative reward of a target policy using logged trajectory data generated by a different behavior policy, without executing the target policy. Reinforcement learning in high-stakes environments, such as healthcare and education, is often limited to off-policy settings due to safety or ethical concerns, or the inability to explore. Hence it is imperative to quantify the uncertainty of the off-policy estimate before the target policy is deployed. In this paper, we propose a novel framework that provides robust and optimistic cumulative reward estimates using one or multiple logged trajectories. Leveraging methodologies from distributionally robust optimization, we show that with proper selection of the size of the distributional uncertainty set, these estimates serve as confidence bounds with non-asymptotic and asymptotic guarantees under stochastic or adversarial environments. Our results are also generalized to batch reinforcement learning and are supported by empirical analysis.
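To illustrate the flavor of the approach, the minimal Python sketch below computes importance-weighted returns from logged trajectories and then a robust (lower) and optimistic (upper) estimate of the expected return over a 1-Wasserstein ball around the empirical return distribution, for which the worst/best-case mean has the closed form mean -/+ radius. This is not the paper's estimator: the function names, the trajectory format, and in particular the hand-picked radius are illustrative assumptions, whereas the paper selects the uncertainty-set size so that the estimates are valid confidence bounds.

    import numpy as np

    def trajectory_returns(trajectories, target_logp, behavior_logp, gamma=0.99):
        # Importance-weighted discounted return for each logged trajectory.
        # A trajectory is a list of (state, action, reward) tuples; target_logp and
        # behavior_logp map (state, action) to log-probabilities under the target
        # and behavior policies, respectively.
        returns = []
        for traj in trajectories:
            log_ratio = sum(target_logp(s, a) - behavior_logp(s, a) for s, a, _ in traj)
            ret = sum(gamma ** t * r for t, (_, _, r) in enumerate(traj))
            returns.append(np.exp(log_ratio) * ret)
        return np.asarray(returns)

    def wasserstein_reward_bounds(returns, radius):
        # Robust and optimistic estimates of the expected return over a
        # 1-Wasserstein ball of the given radius around the empirical
        # distribution of the scalar returns: for the mean, the worst and
        # best cases are attained by shifting all mass by -/+ radius.
        mean = returns.mean()
        return mean - radius, mean + radius

    # Hypothetical usage with two short logged trajectories.
    logged = [[(0, 1, 1.0), (1, 0, 0.5)], [(0, 0, 0.0), (1, 1, 2.0)]]
    target_logp = lambda s, a: np.log(0.7 if a == 1 else 0.3)
    behavior_logp = lambda s, a: np.log(0.5)
    lower, upper = wasserstein_reward_bounds(
        trajectory_returns(logged, target_logp, behavior_logp), radius=0.2)

In this sketch the interval [lower, upper] plays the role of the robust/optimistic reward estimates; the paper's contribution is showing how to choose the radius so that such intervals carry non-asymptotic or asymptotic coverage guarantees.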

Keywords: Uncertainty quantification; Reinforcement learning; Wasserstein robust optimization

Category 1: Robust Optimization

Citation:

Download: [PDF]

Entry Submitted: 01/15/2021
Entry Accepted: 01/15/2021
Entry Last Modified: 01/15/2021
