Reinforcement Learning via Parametric Cost Function Approximation for Multistage Stochastic Programming

The most common approaches for solving stochastic resource allocation problems in the research literature are to use either value functions ("dynamic programming") or scenario trees ("stochastic programming") to approximate the impact of a decision now on the future. By contrast, common industry practice is to use a deterministic approximation of the future, which is easier to understand and solve but is criticized for ignoring uncertainty. We show that a parameterized version of a deterministic lookahead can be an effective way of handling uncertainty while enjoying the computational simplicity of a deterministic lookahead. We present the parameterized lookahead model as a form of policy for solving a stochastic base model, which is then used as the basis for optimizing the parameterized policy. This approach can handle complex, high-dimensional state variables and avoids the usual approximations associated with scenario trees. We formalize the approach and demonstrate its use on a complex, nonstationary energy storage problem.
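To make the idea concrete, the following is a minimal Python sketch of a parametric cost function approximation on a toy storage problem, not the paper's actual model. The deterministic lookahead is collapsed here to a simple order-up-to rule that plans against a point forecast inflated by a scalar buffer parameter theta, and theta is tuned by simulating the stochastic base model. The demand process, cost coefficients, and the use of a scalar theta with grid search are all illustrative assumptions; richer parameterizations and stochastic search methods would play the same roles.

```python
import numpy as np

T = 48                       # simulation horizon (hours)
CAPACITY = 100.0             # storage capacity
C_BUY, C_SHORT = 1.0, 5.0    # purchase cost and shortage penalty per unit

def forecast(t):
    """Nonstationary point forecast of demand (simple diurnal pattern)."""
    return 20.0 + 10.0 * np.sin(2.0 * np.pi * t / 24.0)

def policy(storage, t, theta):
    """Parameterized deterministic lookahead, reduced to an order-up-to
    rule: plan against the forecast inflated by theta, which acts as a
    buffer against uncertainty the deterministic model otherwise ignores."""
    target = min(CAPACITY, theta * forecast(t))
    return max(0.0, target - storage)          # energy to purchase now

def simulate(theta, n_reps=500, seed=0):
    """Estimate the expected cost of the theta-parameterized policy by
    Monte Carlo simulation of the stochastic base model."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_reps):
        storage, cost = 50.0, 0.0
        for t in range(T):
            buy = policy(storage, t, theta)
            storage += buy
            demand = max(0.0, forecast(t) + rng.normal(0.0, 5.0))
            served = min(storage, demand)
            cost += C_BUY * buy + C_SHORT * (demand - served)
            storage -= served
        total += cost
    return total / n_reps

# Tune theta by direct search on the simulated objective; for
# vector-valued parameters a stochastic gradient method would replace
# this grid search.
thetas = np.linspace(0.8, 1.6, 17)
costs = [simulate(th) for th in thetas]
best = thetas[int(np.argmin(costs))]
print(f"best theta = {best:.2f}, estimated cost = {min(costs):.1f}")
```

The point of the sketch is the separation of concerns: the lookahead stays deterministic and cheap to solve, while uncertainty is handled entirely through the tuning of theta against the stochastic base model.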
