Backdoor
Computer science
Computer security
Adversarial system
Task (project management)
Overhead (engineering)
Artificial intelligence
Machine learning
Engineering
Systems engineering
Operating system
Authors
Chengcheng Zhu, Jiale Zhang, Xiaobing Sun, Bing Chen, Weizhi Meng
Identifiers
DOI:10.1016/j.cose.2023.103366
Abstract
Federated learning enables multi-participant joint modeling with distributed, localized training, thus effectively overcoming the problems of data islands and privacy protection. However, existing federated learning frameworks have proven vulnerable to backdoor attacks, in which attackers embed backdoor triggers into local models during the training phase. These triggers are activated by crafted inputs during the prediction phase, leading to attacker-targeted misclassification. To address these issues, existing defense methods focus on both backdoor detection and backdoor erasing. However, passive backdoor detection methods cannot eliminate the effect of embedded backdoor patterns, while backdoor erasing may degrade model performance and incur extra computation overhead. This paper proposes ADFL, a novel adversarial distillation-based backdoor defense scheme for federated learning. ADFL generates fake samples containing backdoor features by deploying a generative adversarial network (GAN) on the server side and relabels the fake samples to obtain the distillation dataset. Then, taking the labeled samples as inputs, knowledge distillation is performed with the clean model as the teacher and the global model as the student to revise the global model and eliminate the influence of backdoored neurons in it, thereby effectively defending against backdoor attacks while maintaining model performance. Experimental results show that ADFL can lower the attack success rate by 95% while maintaining the main task accuracy above 90%.
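To make the server-side pipeline concrete, here is a minimal PyTorch sketch of the distillation step the abstract describes: GAN-generated samples are relabeled with the clean teacher's softened outputs, and the (possibly backdoored) aggregated global model is trained as the student to match them. This is a sketch under stated assumptions, not the paper's implementation: the toy MLP architectures, the function `distill_global_model`, the temperature `T`, and all hyperparameters are illustrative, the GAN training loop that produces the generator is omitted, and the teacher's soft labels stand in for ADFL's relabeling procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy generator mapping noise vectors to fake samples that are
    intended to carry backdoor-relevant features (GAN training omitted)."""
    def __init__(self, z_dim=64, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

class Classifier(nn.Module):
    """Stand-in architecture for both the clean teacher and the
    aggregated global student model."""
    def __init__(self, in_dim=784, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def distill_global_model(teacher, student, generator,
                         steps=100, batch=64, z_dim=64, T=4.0, lr=1e-3):
    """Server-side distillation pass: relabel generated samples with the
    clean teacher, then train the global student toward the teacher's
    softened outputs to suppress backdoored neurons."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    teacher.eval()
    for _ in range(steps):
        z = torch.randn(batch, z_dim)
        fake = generator(z).detach()  # fake samples form the distillation set
        with torch.no_grad():
            # "Relabeling": teacher's temperature-softened predictions
            soft_labels = F.softmax(teacher(fake) / T, dim=1)
        log_probs = F.log_softmax(student(fake) / T, dim=1)
        # Standard distillation loss, scaled by T^2 to keep gradient magnitudes
        loss = F.kl_div(log_probs, soft_labels, reduction="batchmean") * T * T
        opt.zero_grad()
        loss.backward()
        opt.step()
    return student

# Usage: revise the aggregated global model using a trusted clean model.
teacher, student, gen = Classifier(), Classifier(), Generator()
revised_global = distill_global_model(teacher, student, gen, steps=10)
```

One design point worth noting: because the student is updated only on generated samples rather than client data, this revision step fits the paper's stated goal of erasing backdoor influence without extra rounds of client-side retraining.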