Backdoor
Computer science
Computer security
Federated learning
Anomaly detection
Deep learning
State (computer science)
Training set
Artificial intelligence
Machine learning
Algorithm
Authors
Xueluan Gong, Yanjiao Chen, Qian Wang, Weihan Kong
Identifier
DOI: 10.1109/mwc.017.2100714
Abstract
The federated learning framework is designed for massively distributed training of deep learning models among thousands of participants without compromising the privacy of their training datasets. The training data across participants usually follow heterogeneous distributions. Moreover, the central server aggregates the updates submitted by different parties but has no visibility into how those updates were created. These inherent characteristics of federated learning raise severe security concerns. Malicious participants can upload poisoned updates that introduce a backdoored functionality into the global model: the backdoored global model misclassifies any input stamped with the backdoor trigger into an attacker-chosen label, while behaving normally on inputs without the trigger. In this work, we present a comprehensive review of state-of-the-art backdoor attacks and defenses in federated learning. We classify existing backdoor attacks into two categories, data poisoning attacks and model poisoning attacks, and divide the defenses into anomaly update detection, robust federated training, and backdoored model restoration. We give a detailed comparison of both attacks and defenses through experiments. Lastly, we pinpoint a variety of potential future directions for both backdoor attacks and defenses in the federated learning framework.
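To make the data-poisoning attack described in the abstract concrete, here is a minimal sketch of how a malicious participant might prepare its local dataset: a small trigger patch is stamped onto a fraction of the client's images and their labels are flipped to an attacker-chosen target. The function names (`stamp_trigger`, `poison_client_dataset`), the corner-patch trigger, and all parameter values are illustrative assumptions, not the paper's specific attack.

```python
import numpy as np

def stamp_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger patch in the bottom-right corner.

    The patch shape/location is an illustrative choice; real attacks
    use many trigger patterns.
    """
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_client_dataset(images, labels, target_label,
                          poison_fraction=0.2, seed=0):
    """Data-poisoning backdoor on one malicious client's local data:
    stamp the trigger on a random fraction of samples and flip their
    labels to target_label. The poisoned set is then used in local
    training, and the resulting update is sent to the server."""
    rng = np.random.default_rng(seed)
    n = len(images)
    idx = rng.choice(n, size=int(poison_fraction * n), replace=False)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    for i in idx:
        poisoned_images[i] = stamp_trigger(images[i])
        poisoned_labels[i] = target_label
    return poisoned_images, poisoned_labels, idx
```

A model trained on such a mixture learns the intended task on clean samples but associates the trigger patch with `target_label`, which is exactly the dual behavior the abstract describes.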