Modeling Time-dependent Randomness in Stochastic Dual Dynamic Programming

We consider multistage stochastic programming problems in which uncertainty enters the right-hand sides of the constraints. Stochastic Dual Dynamic Programming (SDDP) is a popular method for solving such problems under the assumption that the random data process is stagewise independent. There are two main approaches to incorporating dependence into SDDP. One is to model the data process as an autoregressive time series and to reformulate the problem in stagewise independent terms by adding state variables to the model (TS-SDDP). The other is to use a Markov chain discretization of the random data process (MC-SDDP). While MC-SDDP can handle any Markovian data process, it loses some of the advantages of statistical analysis of the policy under the true process. In this work, we compare both approaches in a computational study based on the long-term operational planning problem of the Brazilian interconnected power systems. We find that the optimality bounds computed by MC-SDDP close faster than those of TS-SDDP, and that the MC-SDDP policy dominates the TS-SDDP policy. When implementing the optimized policies on real data, we observe that not only the solution method but also the quality of the stochastic model affects policy performance, and that an Average Value-at-Risk (AVaR) formulation is effective in making the policy robust against a misspecified stochastic model.
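To illustrate the time-series reformulation, the following is a minimal sketch; the first-order autoregressive form and the symbols \xi_t, \mu, \Phi, \varepsilon_t, x_t, Q_t are assumptions made here for exposition and are not notation taken from the paper. Suppose the random right-hand side follows an AR(1) process,

\[
  \xi_t = \mu + \Phi\,(\xi_{t-1} - \mu) + \varepsilon_t, \qquad \varepsilon_t \ \text{i.i.d.}
\]

Because the errors \varepsilon_t are stagewise independent, adding the lagged realization \xi_{t-1} to the state makes the dynamic programming recursion stagewise independent,

\[
  Q_t(x_{t-1}, \xi_{t-1})
  = \mathbb{E}_{\varepsilon_t}\Big[
      \min_{x_t \in X_t(x_{t-1},\,\xi_t)} c_t^\top x_t + Q_{t+1}(x_t, \xi_t)
    \Big],
  \qquad \xi_t = \mu + \Phi\,(\xi_{t-1} - \mu) + \varepsilon_t,
\]

so standard SDDP cuts can be computed in the augmented state space (x_t, \xi_t), at the cost of a higher-dimensional value function.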

Citation

Working Paper, Vienna University of Economics and Business
