Adjustability in Robust Linear Optimization

We investigate the concept of adjustability -- the difference in objective values between two types of dynamic robust optimization formulations: one in which (static) decisions are made before the uncertainty is realized, and one in which the uncertainty is resolved before (adjustable) decisions are made. This difference reflects the value of information and decision timing in optimization under uncertainty, and is related to several other concepts such as interchangeability in games and the optimality of decision rules in robust optimization. We develop a theoretical framework to quantify adjustability based on the input data of a robust optimization problem with a linear objective, linear constraints, and fixed recourse. We make very few additional assumptions. In particular, we do not assume constraint-wise separability or parameter nonnegativity, which are commonly imposed in the literature on adjustability. This allows us to study important but previously under-investigated problems, such as formulations with equality constraints and problems with both upper- and lower-bound constraints. Building on a connection between the reformulations of the static and fully adjustable problems, our analysis yields a necessary and sufficient condition -- in the form of a theorem of the alternatives -- for adjustability to be zero when the uncertainty set is polyhedral. Based on this sharp characterization, we provide a mixed-integer optimization formulation that serves as a certificate of zero adjustability. We then develop a constructive approach to quantify adjustability for general uncertainty sets, which yields an efficient and tight algorithm for bounding adjustability. We demonstrate the efficiency and tightness through both theoretical and numerical analyses.
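To make the notion concrete, here is a minimal sketch of the two formulations with illustrative notation (decision $x$, cost vector $c$, fixed constraint data $(A, d, D)$, uncertainty set $\mathcal{U}$); the paper's exact setup may differ:
$$
z_{\text{static}} = \min_{x} \left\{ c^\top x \;:\; A x \ge d + D u \;\; \forall u \in \mathcal{U} \right\},
\qquad
z_{\text{adj}} = \max_{u \in \mathcal{U}} \; \min_{x} \left\{ c^\top x \;:\; A x \ge d + D u \right\}.
$$
Adjustability is the gap $z_{\text{static}} - z_{\text{adj}} \ge 0$: any static decision is feasible for every realization of $u$, so observing $u$ before choosing $x$ can only help, and the gap measures by how much.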
