Design of Poisoning Attacks on Linear Regression Using Bilevel Optimization

Poisoning attacks are one of the attack types commonly studied in adversarial machine learning. The adversary mounting a poisoning attack is assumed to have access to the training process of a machine learning algorithm and aims to prevent the algorithm from functioning properly by injecting manipulated data while the algorithm is being trained. In this work, we focus on poisoning attacks against linear regression models that aim to weaken the predictive power of the attacked regression model. We propose a bilevel optimization problem to model this adversarial process between the attacker, who generates the poisoning samples, and the learner, who tries to learn the best predictive regression model. We derive an equivalent single-level optimization problem by exploiting the optimality conditions of the learner's problem. A commercial solver is used to solve the resulting single-level problem, in which the whole set of poisoning samples is generated at once. In addition, we introduce an iterative approach that determines only a portion of the poisoning samples at each iteration. Extensive experiments on two realistic datasets show that the proposed attack strategies are superior to a benchmark algorithm from the literature.
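To fix ideas, here is a minimal sketch of this type of bilevel formulation; the notation is illustrative, not the report's own. The attacker chooses a set of poisoning points \(D_p\) to maximize the trained model's error on a clean validation set \(D_{\mathrm{val}}\), while the learner fits, say, a ridge regression model on the poisoned training set \(D_{\mathrm{tr}} \cup D_p\):

\[
\max_{D_p} \ \frac{1}{|D_{\mathrm{val}}|} \sum_{(x,\, y) \in D_{\mathrm{val}}} \left( \theta^{*\top} x + b^* - y \right)^2
\]
\[
\text{s.t.}\quad (\theta^*, b^*) \in \arg\min_{\theta,\, b} \ \frac{1}{|D_{\mathrm{tr}} \cup D_p|} \sum_{(x,\, y) \in D_{\mathrm{tr}} \cup D_p} \left( \theta^{\top} x + b - y \right)^2 + \lambda \lVert \theta \rVert_2^2 .
\]

Because the inner (learner's) problem is convex and smooth, its arg min can be replaced by the stationarity condition \(\nabla_{\theta, b}(\cdot) = 0\), turning the bilevel problem into a single-level problem with nonlinear equality constraints of the kind a general-purpose nonlinear solver can handle; this is the reduction the abstract refers to.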

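The iterative variant can be sketched in the same spirit. The following Python snippet is a hypothetical illustration, not the report's implementation: it optimizes one batch of poison points at a time by projected gradient ascent on the validation MSE, differentiating through the closed-form ridge solution via finite differences. All function names and parameter choices here are assumptions.

import numpy as np

def ridge_fit(X, y, lam):
    # Closed-form ridge solution; bias folded in via an all-ones column
    # (for brevity, the bias term is regularized too).
    Xb = np.hstack([X, np.ones((len(X), 1))])
    d = Xb.shape[1]
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(d), Xb.T @ y)

def val_mse(theta, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.mean((Xb @ theta - y) ** 2)

def poison_batch(X_tr, y_tr, X_val, y_val, n_poison, lam=0.1,
                 steps=200, lr=0.05, rng=None):
    # Hypothetical single iteration of the batched attack: one batch of
    # poison points is tuned by projected gradient ascent on the
    # validation MSE of the ridge model retrained on the poisoned data.
    # Features and targets are assumed normalized to [0, 1].
    rng = np.random.default_rng(rng)
    Xp = rng.uniform(0, 1, size=(n_poison, X_tr.shape[1]))
    yp = rng.uniform(0, 1, size=n_poison)

    def objective(Xp, yp):
        theta = ridge_fit(np.vstack([X_tr, Xp]),
                          np.concatenate([y_tr, yp]), lam)
        return val_mse(theta, X_val, y_val)

    eps = 1e-5
    for _ in range(steps):
        base = objective(Xp, yp)
        gX = np.zeros_like(Xp)
        gy = np.zeros_like(yp)
        for j in range(n_poison):
            # Finite-difference gradient, coordinate by coordinate.
            for k in range(Xp.shape[1]):
                Xe = Xp.copy(); Xe[j, k] += eps
                gX[j, k] = (objective(Xe, yp) - base) / eps
            ye = yp.copy(); ye[j] += eps
            gy[j] = (objective(Xp, ye) - base) / eps
        # Gradient ascent step, projected back onto the feasible box.
        Xp = np.clip(Xp + lr * gX, 0, 1)
        yp = np.clip(yp + lr * gy, 0, 1)
    return Xp, yp

In the iterative scheme described in the abstract, one would call a routine like poison_batch repeatedly, folding each returned batch into (X_tr, y_tr) before generating the next, instead of producing all poison points in a single solve.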
Citation

Technical report, University of Edinburgh, INRIA-Lille Europe and CNRS Centrale Lille, May 2021.
