Backdoor
Computer science
MNIST database
Cluster analysis
Federated learning
Computer security
Algorithm
Data mining
Artificial intelligence
Deep learning
Authors
Yongkang Wang, Di-Hua Zhai, Yi He, Yuanqing Xia
Identifier
DOI:10.1016/j.future.2023.01.026
Abstract
To address backdoor attacks in federated learning, which arise from its inherently distributed and privacy-preserving nature, we propose RDFL, a defense with four components: selecting the eligible parameters for computing cosine distances; executing adaptive clustering; detecting and removing suspicious malicious local models; and performing adaptive clipping and noising. We evaluate RDFL against existing baselines on the MNIST, FEMNIST, and CIFAR-10 datasets under non-independent and identically distributed (non-IID) scenarios, covering a range of attack settings: different numbers of malicious attackers, distributed backdoor attacks, different poison ratios of local data, and model poisoning attacks. Experimental results show that RDFL effectively mitigates backdoor attacks and outperforms the compared baselines.
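The four-stage pipeline in the abstract (distance computation, clustering, filtering, then clipping and noising) can be illustrated with a minimal sketch. This is an illustrative approximation, not the paper's algorithm: the function names are hypothetical, the "adaptive clustering" step is replaced here by a simple median-distance split, and the clip bound and noise scale are assumed constants.

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance between two flattened model updates."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def robust_aggregate(updates, margin=0.1, noise_std=0.01, seed=0):
    """Sketch of an RDFL-style robust aggregation (assumed simplification):
    1) compute each update's mean cosine distance to the others,
    2) split updates by a median-distance threshold (stand-in for the
       paper's adaptive clustering),
    3) discard the suspicious (high-distance) updates,
    4) clip survivors to the median norm and add Gaussian noise."""
    rng = np.random.default_rng(seed)
    n = len(updates)
    mean_dist = np.array([
        np.mean([cosine_distance(updates[i], updates[j])
                 for j in range(n) if j != i])
        for i in range(n)
    ])
    threshold = np.median(mean_dist) + margin
    benign = [i for i in range(n) if mean_dist[i] <= threshold]
    suspicious = [i for i in range(n) if mean_dist[i] > threshold]
    # Clip each surviving update to the median benign norm.
    clip = np.median([np.linalg.norm(updates[i]) for i in benign])
    clipped = [updates[i] * min(1.0, clip / np.linalg.norm(updates[i]))
               for i in benign]
    # Average and add noise to blunt any residual backdoor signal.
    agg = np.mean(clipped, axis=0) + rng.normal(0.0, noise_std,
                                                size=updates[0].shape)
    return agg, suspicious

# Usage: 8 benign updates near [1, 0], 2 sign-flipped malicious updates.
rng = np.random.default_rng(1)
updates = [np.array([1.0, 0.0]) + rng.normal(0, 0.05, 2) for _ in range(8)]
updates += [np.array([-1.0, 0.0]) for _ in range(2)]
agg, suspicious = robust_aggregate(updates)
```

On this toy input the two sign-flipped updates sit far from the benign cluster in cosine distance, so they land above the threshold and are excluded before averaging.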