- General risk measures for robust machine learning

Émilie Chouzenoux (emilie.chouzenoux@centralesupelec.fr), Henri Gérard (hgerard.pro@gmail.com), Jean-Christophe Pesquet (jean-christophe@pesquet.eu)

Abstract: A wide array of machine learning problems can be formulated as the minimization of the expectation of a convex loss function over some parameter space. Since the probability distribution of the data of interest is usually unknown, it is often estimated from training sets, which may lead to poor out-of-sample performance. In this work, we bring new insights into this problem by using the framework developed in quantitative finance for risk measures. We show that the original min-max problem can be recast as a convex minimization problem under suitable assumptions. We discuss several important examples of robust formulations, in particular by defining ambiguity sets based on $\varphi$-divergences and the Wasserstein metric. We also propose an efficient algorithm for solving the corresponding convex optimization problems involving complex convex constraints. Through simulation examples, we demonstrate that this algorithm scales well on real data sets.

Keywords: Risk measures, robust statistics, machine learning, convex optimization, divergences, Wasserstein distance.
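To illustrate the kind of robust formulation the abstract refers to (this is a generic sketch, not the authors' algorithm): for the Kullback-Leibler divergence, a standard $\varphi$-divergence, the worst-case expected loss over an ambiguity set of radius $\rho$ around the empirical distribution $P$ admits the well-known one-dimensional dual $\sup_{\mathrm{KL}(Q\|P)\le\rho} \mathbb{E}_Q[\ell] = \inf_{\lambda>0} \lambda\rho + \lambda\log \mathbb{E}_P[e^{\ell/\lambda}]$, so the min-max problem reduces to a scalar convex minimization. A minimal NumPy sketch, assuming empirical losses on a training set:

```python
import numpy as np

def kl_dro_risk(losses, rho):
    """Worst-case expected loss over a KL ambiguity set of radius rho
    around the empirical distribution, computed via the 1-D dual
        inf_{lam > 0}  lam * rho + lam * log E_P[exp(loss / lam)].
    The dual objective is convex in lam; a log-spaced grid search suffices
    for illustration purposes."""
    losses = np.asarray(losses, dtype=float)
    m = losses.max()  # shift for a numerically stable log-sum-exp

    def dual(lam):
        # lam * log E[exp(l/lam)] = m + lam * log E[exp((l - m)/lam)]
        return lam * rho + m + lam * np.log(np.mean(np.exp((losses - m) / lam)))

    return min(dual(lam) for lam in np.logspace(-3, 2, 200))

# hypothetical per-sample losses of some fitted model
losses = np.array([0.2, 0.5, 1.0, 0.1, 0.8])
plain = losses.mean()
robust = kl_dro_risk(losses, rho=0.1)
# the robust risk lies between the empirical mean and the worst single loss
assert plain <= robust <= losses.max()
```

The robust risk upweights high-loss samples, which is the mechanism behind the improved out-of-sample behavior discussed in the paper; the Wasserstein-based ambiguity sets it also considers lead to different (transport-based) duals not shown here.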
Category 1: Robust Optimization
Category 2: Convex and Nonsmooth Optimization (Convex Optimization)
Category 3: Stochastic Programming

Citation: CentraleSupélec, Inria Saclay, Université Paris-Saclay, Center for Visual Computing, Gif-sur-Yvette, 91190, France and Université Paris-Est, CERMICS (ENPC), Labex Bézout, 6-8 avenue Blaise Pascal, Champs-sur-Marne, 77420, France, April 2019

Entry Submitted: 04/24/2019