Optimization Online

Revisiting Approximate Linear Programming Using a Saddle Point Approach

Qihang Lin (qihang-lin***at***uiowa.edu)
Selvaprabu Nadarajah (selvan***at***uic.edu)
Negar Soheili (nazad***at***uic.edu)

Abstract: Approximate linear programs (ALPs) are well-known models for computing value function approximations (VFAs) of intractable Markov decision processes (MDPs) arising in applications. VFAs from ALPs have desirable theoretical properties, define an operating policy, and provide a lower bound on the optimal policy cost, which can be used to assess the suboptimality of heuristic policies. However, solving ALPs near-optimally remains challenging, for example, when approximating MDPs with nonlinear cost functions and transition dynamics or when rich basis functions are required to obtain a good VFA. We address this tension between theory and solvability by proposing a convex saddle-point reformulation of an ALP that includes as primal and dual variables, respectively, a vector of basis function weights and a constraint violation density function over the state-action space. To solve this reformulation, we develop a proximal stochastic mirror descent (PSMD) method. We establish that PSMD returns a near-optimal ALP solution and a lower bound on the optimal policy cost in a finite number of iterations with high probability. We numerically compare PSMD with several benchmarks on inventory control and energy storage applications. We find that the PSMD lower bound is tighter than a perfect information bound. In contrast, the constraint sampling approach to solve ALPs may not provide a lower bound, and applying row generation to tackle ALPs is not computationally viable. PSMD policies outperform problem-specific heuristics and are comparable to or better than the policies obtained using constraint sampling. Overall, our ALP reformulation and solution approach broaden the applicability of approximate linear programming.
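
To make the flavor of the approach concrete, the following is a minimal, hedged sketch of stochastic mirror descent on a bilinear saddle point — a finite-dimensional stand-in for the reformulation described above, not the paper's actual PSMD method. The problem data `A`, `b`, `c`, the step-size rule, and the restriction of the dual density to `m` sampled constraints are all illustrative assumptions.

```python
import numpy as np

# Illustrative saddle point: min_w max_{p in simplex} c'w + p'(b - A w).
# Here w stands in for the basis-function weights (primal) and p for a
# constraint-violation density restricted to m sampled state-action pairs
# (dual). All data below is synthetic.
rng = np.random.default_rng(0)
m, n = 50, 5                        # sampled constraints, basis weights
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)

w = np.zeros(n)                     # primal iterate (Euclidean geometry)
p = np.full(m, 1.0 / m)             # dual iterate (entropy/simplex geometry)

for t in range(1, 501):
    eta = 0.1 / np.sqrt(t)          # decaying step size (illustrative)
    # Unbiased primal gradient: sample one constraint index uniformly,
    # so that E[g_w] = c - A'p.
    i = rng.integers(m)
    g_w = c - m * p[i] * A[i]
    w = w - eta * g_w               # Euclidean gradient step on w
    # Entropic mirror ascent on p (full dual gradient, for simplicity):
    # multiplicative update followed by re-normalization onto the simplex.
    p = p * np.exp(eta * (b - A @ w))
    p = p / p.sum()
```

The entropic (exponentiated-gradient) update keeps the dual iterate a valid density at every step, which mirrors the role of the constraint violation density in the reformulation; the paper's method additionally uses a proximal term and operates over a continuous state-action space.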

Keywords: approximate linear programming, approximate dynamic programming, Markov decision processes, first-order methods, energy storage, inventory control

Category 1: Other Topics (Dynamic Programming)

Category 2: Stochastic Programming

Category 3: Infinite Dimensional Optimization (Semi-infinite Programming)

Citation: University of Illinois at Chicago and University of Iowa working paper, June 2017

Download: [PDF]

Entry Submitted: 06/11/2017
Entry Accepted: 06/12/2017
Entry Last Modified: 06/11/2018
