Backdoor
Shadow (psychology)
Exploit
Computer science
Computer security
Model attack
Psychology
Psychotherapist
Authors
Qixian Ren, Yu Zheng, Yang Liu, Yue Li, Jianfeng Ma
Identifier
DOI:10.1016/j.cose.2024.103740
Abstract
Federated learning systems enable data localization by aggregating model parameters from all parties for global model training, but their distributed learning approach and heterogeneous multi-party data distribution also expose new security threats. Backdoor attacks exploit the inability of federated learning systems to audit client data, making it easy to inject backdoors into the global model by submitting poisoned model updates. Once these poisoned updates are aggregated, the global model is backdoored, leading to catastrophic model security problems. Existing studies have deployed backdoor attacks in federated learning with a distributed strategy, but the persistence of these attacks is limited. To achieve better attack performance, this paper proposes a novel backdoor attack method against federated learning systems, which we name Shadow Backdoor Attack (SBA). SBA innovates on attack deployment and introduces a new concept, Attacker Intensity, which distinguishes the different roles attackers play in backdoor attacks against federated learning. SBA implements attacks through a combination of attackers of different intensities, which significantly improves the sustained effect of the attack compared with previous work. Several experiments demonstrate that SBA achieves a high attack success rate and a more sustained attack effect. Moreover, we analyze the impact of the trigger criterion in SBA and confirm its attack effectiveness against two robust FL aggregation algorithms.
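The abstract's core premise, that a client can backdoor the global model simply by submitting a poisoned update that survives server-side averaging, can be illustrated with a toy sketch. This is not the paper's SBA method; it is the well-known "model replacement" scaling trick against plain FedAvg, using a 1-D parameter vector in place of a real model, with all client counts and update values invented for illustration:

```python
import numpy as np

def fedavg(updates):
    """Server-side FedAvg: average the client model updates."""
    return np.mean(updates, axis=0)

# Toy 4-parameter vector standing in for a model's weights.
global_model = np.zeros(4)

# Nine honest clients submit small benign updates.
benign = [np.full(4, 0.1) for _ in range(9)]

# One attacker solves for the update that makes the *averaged*
# result land exactly on its backdoored target parameters:
# since the server computes (sum(benign) + malicious) / n, the
# attacker scales its contribution by n to dominate the average.
n = 10
backdoor_target = np.ones(4)
malicious = n * (backdoor_target - global_model) - np.sum(benign, axis=0)

new_global = global_model + fedavg(benign + [malicious])
# new_global equals backdoor_target: one unaudited client
# replaced the aggregated model despite nine honest updates.
```

Because the server cannot audit client data or updates, the poisoned contribution is indistinguishable from a legitimate one at the protocol level; robust aggregation rules (such as the two the paper evaluates against) attempt to bound or filter exactly this kind of outsized contribution.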