Optimization Online


A note on the convergence of the SDDP algorithm

Kengy Barty (kengy.barty***at***edf.fr)

Abstract: In this paper we are interested in the convergence analysis of the Stochastic Dual Dynamic Programming (SDDP) algorithm in a general framework, regardless of whether the underlying probability space is discrete. We consider a convex, not necessarily linear, stochastic control program and the resulting dynamic programming equation. We prove, under mild assumptions, that the approximations of the Bellman functions produced by SDDP converge uniformly to the solution of the dynamic programming equation. We also prove that the distribution of the state variable along the iterations converges to a steady-state distribution, which turns out to be the optimal distribution of the state variable. We make use of epi-convergence to show that the sequence of policies along the iterations can be sought among continuous policies and converges pointwise to the optimal policy of the problem. All the results mentioned hold almost surely with respect to a distribution derived from the optimal policy. Furthermore, we propose original proofs of these results that, by design, do not rely on a finite representation of the randomness. To the best of our knowledge, these latter results are new.
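The abstract concerns the convergence of the Bellman-function approximations that SDDP builds from cuts. As a purely illustrative sketch (not the paper's algorithm or setting), the snippet below shows the underlying mechanism on a one-dimensional toy problem: supporting cuts computed from exact values and subgradients of a convex expected-value function accumulate into a piecewise-linear lower model whose minimum converges to the true optimum. All names (`scenarios`, `approx`, the interval `[-2, 2]`) are assumptions of this example.

```python
# Illustrative sketch only: cutting-plane approximation of a convex
# expected-value function, the mechanism SDDP uses to approximate
# Bellman functions. Not the algorithm analyzed in the paper.
import numpy as np

scenarios = np.array([-1.0, 1.0])                   # equiprobable noise values w
true_f = lambda x: np.mean((x - scenarios) ** 2)    # f(x) = E[(x - w)^2] = x^2 + 1

cuts = []                                           # pairs (a, b): f(x) >= a + b*x
grid = np.linspace(-2.0, 2.0, 401)                  # candidate states

def approx(x):
    """Current piecewise-linear lower model: max_k (a_k + b_k * x)."""
    return max((a + b * x for a, b in cuts), default=-1e9)

x_trial = 2.0                                       # initial trial state
for _ in range(30):
    # "backward pass": average value and subgradient over scenarios -> one cut
    val = np.mean((x_trial - scenarios) ** 2)
    slope = np.mean(2.0 * (x_trial - scenarios))
    cuts.append((val - slope * x_trial, slope))
    # "forward pass": next trial state minimizes the current lower model
    x_trial = grid[np.argmin([approx(x) for x in grid])]
```

Because each cut is a supporting hyperplane of the true convex function, the model stays a lower bound everywhere while its minimum rises toward the true optimal value (here 1, attained at x = 0).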

Keywords: Dynamic Programming, SDDP

Category 1: Stochastic Programming

Category 2: Other Topics (Dynamic Programming)


Download: [Postscript][PDF]

Entry Submitted: 07/13/2012
Entry Accepted: 07/13/2012
Entry Last Modified: 07/13/2012
