Risk Guarantees for End-to-End Prediction and Optimization Processes

Prediction models are often used to estimate the parameters of optimization models. Although, from an end-to-end view, the real goal is good optimization performance, prediction performance is typically measured on its own. While it is commonly believed that accurately predicting the parameters leads to good subsequent optimization performance, formal theoretical guarantees for this belief are notably lacking. In this paper, we explore conditions under which we can explicitly describe how prediction performance governs optimization performance. Our weaker condition yields an asymptotic convergence result, while our stronger condition yields an exact quantification of the optimization performance in terms of the prediction performance. In general, verifying these conditions is a non-trivial task. Nevertheless, we show that our weaker condition is equivalent to the well-known concept of Fisher consistency from the learning theory literature, which allows us to easily check the weaker condition for several loss functions. We also establish that the squared error loss function satisfies our stronger condition, and we consequently derive the exact theoretical relationship between prediction performance measured with the squared loss and the subsequent optimization performance. In a preliminary computational study on fractional knapsack problems with uncertain profits, we compare the optimization performance of several Fisher consistent prediction loss functions with that of a provably inconsistent loss function.
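To make the setting concrete, the following is a minimal sketch (in Python, not the paper's code) of a predict-then-optimize pipeline of the kind the study evaluates: a linear model trained with the squared error loss predicts item profits from features, the fractional knapsack is solved with the predicted profits, and the resulting decision is scored under the true profits. All names, problem sizes, and the data-generating process below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def solve_fractional_knapsack(profits, weights, capacity):
    # Greedy rule is optimal for the fractional knapsack:
    # take items in decreasing profit/weight ratio until capacity runs out.
    order = np.argsort(-profits / weights)
    x = np.zeros(len(profits))
    remaining = capacity
    for i in order:
        if profits[i] <= 0 or remaining <= 0:
            break  # remaining items only lose profit or no capacity is left
        take = min(weights[i], remaining)
        x[i] = take / weights[i]  # fraction of item i packed
        remaining -= take
    return x

# Assumed data-generating process: profits depend linearly on observed
# features plus noise; weights and capacity are fixed across instances.
n_items, n_features = 20, 5
weights = rng.uniform(1.0, 5.0, size=n_items)
capacity = 0.4 * weights.sum()
theta_true = rng.normal(size=n_features)

def sample_instance():
    features = rng.normal(size=(n_items, n_features))
    profits = features @ theta_true + 0.5 * rng.normal(size=n_items)
    return features, profits

# Prediction step: fit a linear model under the squared error loss.
train = [sample_instance() for _ in range(200)]
X_train = np.vstack([f for f, _ in train])
y_train = np.hstack([p for _, p in train])
theta_hat, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# Optimization step on a fresh instance: solve with predicted profits,
# then evaluate the chosen solution under the true profits.
features, true_profits = sample_instance()
pred_profits = features @ theta_hat
x_pred = solve_fractional_knapsack(pred_profits, weights, capacity)
x_opt = solve_fractional_knapsack(true_profits, weights, capacity)

prediction_mse = np.mean((pred_profits - true_profits) ** 2)
regret = true_profits @ x_opt - true_profits @ x_pred  # optimization gap

print(f"prediction MSE:      {prediction_mse:.3f}")
print(f"optimization regret: {regret:.3f}")

Reporting both the prediction error and the regret of the induced decision, as done here, is what allows a comparison of how different prediction losses translate into optimization performance.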

Citation

Technical report. Tepper School of Business, Carnegie Mellon University
