Risk-Sensitive Variational Bayes: Formulations and Bounds

We study data-driven decision-making problems in a parametrized Bayesian framework. We adopt a risk-sensitive approach to modeling the interplay between statistical estimation of parameters and optimization, by computing a risk measure of a loss/disutility function with respect to the posterior distribution over the parameters. While this is the standard Bayesian decision-theoretic approach, we focus on problems where computing the posterior distribution is intractable, a typical situation in modern applications with large datasets, heterogeneity due to observed covariates, and latent group structure. The key methodological innovation of this paper is to leverage a dual representation of the risk measure and thereby introduce an optimization-based framework for approximately computing the posterior risk-sensitive objective, as opposed to using standard sampling-based methods such as Markov chain Monte Carlo. Our analytical contributions include rigorous finite-sample bounds on the ‘optimality gap’ between optimizers obtained with the computational methods in this paper and the ‘true’ optimizers of a given decision-making problem. We illustrate our results by comparing the theoretical bounds with simulations of a newsvendor problem for two methods derived from our computational framework.
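To make the overall pipeline concrete, the sketch below evaluates a posterior risk-sensitive newsvendor objective using samples from a variational approximation q(theta) rather than MCMC samples from the exact posterior. The Gaussian form of q, the entropic risk measure, the Poisson demand model, and all parameter values are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
# Hypothetical sketch: approximate posterior risk-sensitive newsvendor problem.
import numpy as np

rng = np.random.default_rng(0)

# Assume the demand-rate parameter theta has a fitted variational posterior
# q(theta) = N(mu_q, sigma_q^2); in practice mu_q, sigma_q come from an ELBO fit.
mu_q, sigma_q = 10.0, 1.5

# Newsvendor loss for order quantity a when demand D ~ Poisson(theta):
# unit overage cost c_o, unit underage cost c_u.
c_o, c_u = 1.0, 3.0

def newsvendor_loss(a, demand):
    return c_o * np.maximum(a - demand, 0) + c_u * np.maximum(demand - a, 0)

def entropic_risk(losses, beta=0.1):
    # Entropic risk rho_beta(L) = (1/beta) * log E[exp(beta * L)], one common
    # risk measure admitting a convex-dual (variational) representation.
    return np.log(np.mean(np.exp(beta * losses))) / beta

# Monte Carlo over the approximate posterior q(theta) and the demand model.
theta = rng.normal(mu_q, sigma_q, size=20_000).clip(min=1e-3)
demand = rng.poisson(theta)

# Grid search over order quantities for the approximate risk-sensitive objective.
grid = np.arange(0, 31)
risks = [entropic_risk(newsvendor_loss(a, demand)) for a in grid]
a_star = grid[int(np.argmin(risks))]
print(f"approximate risk-optimal order quantity: {a_star}")
```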

Citation

Technical Report, School of Industrial Engineering, 315 N. Grant St., West Lafayette, IN 47906, May 2019.
