Understanding Distributed Poisoning Attack in Federated Learning
Federated learning
Computer science
Computer security
Trustworthiness
Scheme (mathematics)
Artificial intelligence
Mathematics
Mathematical analysis
Authors
Di Cao, Shan Chang, Zhijian Lin, Guohua Li, Deqing Sun
Source
Venue: International Conference on Parallel and Distributed Systems | Date: 2019-12-01 | Cited by: 96
Identifier
DOI: 10.1109/icpads47876.2019.00042
Abstract
Federated learning is inherently vulnerable to poisoning attacks, since no training samples are released to, or checked by, a trustworthy authority. Poisoning attacks have been widely investigated in the centralized learning paradigm; intuitively, however, distributed poisoning attacks, in which multiple attackers collude and each injects malicious training samples into its own local model, may cause far greater damage in federated learning. In this paper, through a real implementation of a federated learning system and of distributed poisoning attacks, we obtain several observations about the relations among the number of poisoned training samples, the number of attackers, and the attack success rate. Moreover, we propose a scheme, Sniper, to eliminate poisoned local models from malicious participants during training. Sniper identifies benign local models by solving a maximum clique problem, and suspected (poisoned) local models are ignored during global model updating. Experimental results demonstrate the efficacy of Sniper: the attack success rate is reduced to around 2% even when a third of the participants are attackers.
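The abstract states only that Sniper selects benign local models by solving a maximum clique problem and that suspected models are excluded from the global update. The sketch below illustrates that idea under stated assumptions, not the authors' published method: it assumes local models are flattened parameter vectors, that two models are linked when their Euclidean distance falls below a threshold, and it uses networkx's maximal-clique enumeration as the solver. The function name sniper_filter and the threshold value are hypothetical.

```python
# Illustrative Sniper-style filter. The distance metric, threshold, and
# clique solver are assumptions for this sketch, not the paper's exact design.
import itertools
import numpy as np
import networkx as nx

def sniper_filter(local_updates, threshold):
    """Return indices of local models judged benign (largest mutually close set).

    local_updates : list of 1-D np.ndarray, flattened model parameters.
    threshold     : assumed pairwise Euclidean-distance cutoff for an edge.
    """
    n = len(local_updates)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    # Connect two local models if their parameter vectors are close enough.
    for i, j in itertools.combinations(range(n), 2):
        if np.linalg.norm(local_updates[i] - local_updates[j]) <= threshold:
            g.add_edge(i, j)
    # Benign models are assumed to form the largest mutually consistent group,
    # i.e. a maximum clique (the largest among all maximal cliques).
    benign = max(nx.find_cliques(g), key=len)
    return sorted(benign)

# Usage: the server averages only the clique members' updates.
updates = [np.random.randn(10) for _ in range(5)]
updates.append(updates[0] + 100.0)          # an obvious outlier ("poisoned")
benign_idx = sniper_filter(updates, threshold=10.0)
global_update = np.mean([updates[i] for i in benign_idx], axis=0)
print(benign_idx)                           # typically [0, 1, 2, 3, 4]
```

In a federated round, the server would aggregate only the clique members' updates into the global model; the threshold controls how aggressively divergent local models are excluded.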