General risk measures for robust machine learning

A wide array of machine learning problems are formulated as the minimization of the expectation of a convex loss function over some parameter space. Since the probability distribution of the data of interest is usually unknown, it is often estimated from training sets, which may lead to poor out-of-sample performance. In this work, we bring new insights into this problem by drawing on the framework developed in quantitative finance for risk measures. We show that the original min-max problem can be recast as a convex minimization problem under suitable assumptions. We discuss several important examples of robust formulations, in particular ambiguity sets defined via $\varphi$-divergences and the Wasserstein metric. We also propose an efficient algorithm for solving the corresponding convex optimization problems, which involve complex convex constraints. Through simulation examples, we demonstrate that this algorithm scales well on real data sets.
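To make the setting concrete, the sketch below is a minimal illustration (not the paper's algorithm) of a distributionally robust formulation: minimizing the Conditional Value-at-Risk (CVaR) of the hinge loss, which coincides with the worst-case expected loss over a simple ambiguity set of reweightings of the empirical distribution with density bounded by $1/(1-\alpha)$. The use of the CVXPY modeling library, the synthetic data, and all parameter names are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's method): robust linear
# classification by minimizing the CVaR of the hinge loss, a convex
# reformulation of a worst-case expectation over a simple ambiguity set.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, d = 200, 5                       # number of samples, feature dimension (illustrative)
X = rng.standard_normal((n, d))     # synthetic features
y = np.sign(X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n))

alpha = 0.9                         # CVaR level; closer to 1 = more conservative
lam = 0.1                           # ridge regularization weight

w = cp.Variable(d)                  # classifier weights
b = cp.Variable()                   # intercept
t = cp.Variable()                   # CVaR auxiliary threshold (Rockafellar-Uryasev form)

losses = cp.pos(1 - cp.multiply(y, X @ w + b))              # per-sample hinge loss
cvar = t + cp.sum(cp.pos(losses - t)) / ((1 - alpha) * n)   # convex CVaR surrogate
objective = cp.Minimize(cvar + lam * cp.sum_squares(w))

problem = cp.Problem(objective)
problem.solve()
print("robust objective value:", problem.value)
```

Taking alpha closer to 1 enlarges the implicit ambiguity set and yields a more conservative classifier, while alpha = 0 recovers ordinary empirical hinge-loss minimization.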

Citation

CentraleSupélec, Inria Saclay, Université Paris-Saclay, Center for Visual Computing, Gif-sur-Yvette, 91190, France; and Université Paris-Est, CERMICS (ENPC), Labex Bézout, 6-8 avenue Blaise Pascal, Champs-sur-Marne, 77420, France. April 2019.
