Risk-Averse Stochastic Dual Dynamic Programming

We formulate a risk-averse multi-stage stochastic program using conditional value-at-risk (CVaR) as the risk measure. The underlying random process is assumed to be stage-wise independent, and a stochastic dual dynamic programming (SDDP) algorithm is applied. We discuss the poor performance of the standard upper bound estimator in the risk-averse setting and propose a new approach based on importance sampling, which yields improved upper bound estimators. Only modest additional computational effort is required to use our new estimators, and they significantly improve control of solution quality when running SDDP in the risk-averse setting. We give computational results for multi-stage asset allocation using a log-normal distribution for the asset returns.
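To make the two ingredients named above concrete, here is a minimal sketch, not the paper's estimator, of (i) computing a sample CVaR of a loss at level alpha and (ii) a basic importance-sampling mean estimator under a log-normal return model, where the sampling distribution is shifted and the samples are reweighted by the likelihood ratio. The function names, the proposal shift, and all parameter values below are illustrative assumptions.

```python
# Illustrative sketch only: sample CVaR and importance-sampling reweighting
# under a log-normal return model. Not the algorithm from the paper.
import numpy as np

def cvar(losses, alpha=0.95):
    """Sample conditional value-at-risk: mean of the worst (1 - alpha) fraction of losses."""
    losses = np.sort(np.asarray(losses))
    tail_start = int(np.ceil(alpha * len(losses)))
    return losses[tail_start:].mean()

def importance_sampling_mean(h, mu, sigma, mu_shift, n=100_000, seed=0):
    """Estimate E[h(R)] for R = exp(X), X ~ N(mu, sigma^2), by sampling X from
    a shifted proposal N(mu_shift, sigma^2) and reweighting by the likelihood ratio."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mu_shift, sigma, size=n)                      # proposal samples
    # log likelihood ratio of N(mu, sigma^2) over N(mu_shift, sigma^2)
    log_w = ((x - mu_shift) ** 2 - (x - mu) ** 2) / (2 * sigma ** 2)
    return np.mean(np.exp(log_w) * h(np.exp(x)))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    returns = np.exp(rng.normal(0.05, 0.2, size=50_000))         # log-normal asset returns
    losses = 1.0 - returns                                       # loss on a unit stake
    print("CVaR_0.95 of loss:", cvar(losses, 0.95))
    # tail probability P(R < 0.7), estimated with a left-shifted proposal
    prob = importance_sampling_mean(lambda r: (r < 0.7).astype(float),
                                    mu=0.05, sigma=0.2, mu_shift=-0.4)
    print("P(return < 0.7) ~", prob)
```

The sketch only illustrates why importance sampling helps: CVaR is driven by the tail of the distribution, so shifting the sampling distribution toward the tail and reweighting reduces the variance of tail estimates compared with plain Monte Carlo.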

Citation

The final publication is available at Springer as "Evaluating policies in risk-averse multi-stage stochastic programming."