Keywords
Forgetting, Scalability, Exploit, Computer science, Component (thermodynamics), Process (computing), Federated learning, Computer security, Artificial intelligence, Distributed computing, Database, Philosophy, Linguistics, Physics, Thermodynamics, Operating system
Authors
Chen Zhang, Boyang Zhou, Zhiqiang He, Zeyuan Liu, Yanjiao Chen, Wenyuan Xu, Baochun Li
Identifier
DOI: 10.1109/infocom53939.2023.10228981
Abstract
Federated learning is exposed to model poisoning attacks, as compromised clients may submit malicious model updates to pollute the global model. To defend against such attacks, robust aggregation rules have been designed so that the centralized server can winnow out outlier updates, significantly reducing the effectiveness of existing poisoning attacks. In this paper, we develop an advanced model poisoning attack against defensive aggregation rules. In particular, we exploit the catastrophic forgetting phenomenon that arises during continual learning to destroy the memory of the global model. Our proposed framework, called Oblivion, features two key components. The first component prioritizes for poisoning the weights that have the most influence on model accuracy, which degrades the global model more severely than perturbing all weights equally. The second component smooths malicious model updates based on the number of compromised clients selected in the current round, adjusting the degree of poisoning to suit the dynamics of each training round. We implement a fully functional prototype of Oblivion in PLATO, a real-world scalable federated learning framework. Our extensive experiments over three datasets demonstrate that Oblivion can boost the performance of model poisoning attacks against unknown defensive aggregation rules.
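The abstract only summarizes Oblivion's two components; the paper's actual algorithm is not reproduced here. The following is a minimal Python sketch of one plausible interpretation, assuming gradient magnitude as the influence heuristic, a sign-flip perturbation of the selected weights, and a per-client share as the smoothing rule. All function names, parameters, and heuristics below are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of the two Oblivion components described in the
# abstract. The influence heuristic, perturbation rule, and smoothing
# rule are all assumptions for illustration.
import numpy as np

def prioritized_poison(weights, gradients, top_ratio=0.1, strength=1.0):
    """Component 1 (assumed form): perturb only the most influential weights.

    Influence is approximated here by gradient magnitude; concentrating
    the perturbation on these weights is meant to degrade accuracy more
    than spreading the same budget over all weights equally.
    """
    k = max(1, int(top_ratio * weights.size))
    influence = np.abs(gradients)
    top_idx = np.argpartition(influence, -k)[-k:]  # indices of top-k weights
    poisoned = weights.copy()
    # Push the selected weights against the gradient direction to erase
    # what the global model has learned (inducing catastrophic forgetting).
    poisoned[top_idx] -= strength * np.sign(gradients[top_idx])
    return poisoned

def smooth_update(benign_update, poisoned_update, n_compromised_selected):
    """Component 2 (assumed form): scale the malicious deviation by the
    number of compromised clients sampled this round, so each individual
    update stays close to benign behavior and is harder for a robust
    aggregation rule to winnow out.
    """
    share = 1.0 / max(1, n_compromised_selected)
    return benign_update + share * (poisoned_update - benign_update)

# Usage on a toy flattened model (all values synthetic):
rng = np.random.default_rng(0)
w = rng.normal(size=1000)   # flattened global model weights
g = rng.normal(size=1000)   # gradients estimated from local data
poisoned = prioritized_poison(w, g)
update = smooth_update(w, poisoned, n_compromised_selected=3)
```

The per-client share in smooth_update reflects one reading of the abstract: when more compromised clients are sampled in a round, each submits a milder update, so the aggregate poisoning effect is preserved while no single update stands out as an outlier.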