Keywords: computer science; upload; MNIST database; adversarial; task (project management); artificial intelligence; federated learning; machine learning; audit; edge computing; computer security; deep learning; adversary; World Wide Web; engineering; economics; management; systems engineering
Authors
Ying Zhao,Junjun Chen,Jiale Zhang,Di Wu,Michael Blumenstein,Shui Yu
Abstract
In the age of the Internet of Things (IoT), large numbers of sensors and edge devices are deployed in various application scenarios; collaborative learning is therefore widely used in IoT to implement crowd intelligence by inviting multiple participants to complete a training task. As a collaborative learning framework, federated learning is designed to preserve user data privacy: participants jointly train a global model without uploading their private training data to a third-party server. Nevertheless, federated learning is under threat from poisoning attacks, in which adversaries upload malicious model updates to contaminate the global model. To detect and mitigate poisoning attacks in federated learning, we propose a poisoning defense mechanism that uses generative adversarial networks to generate auditing data during the training procedure and removes adversaries by auditing their model accuracy. Experiments conducted on two well-known datasets, MNIST and Fashion-MNIST, suggest that federated learning is vulnerable to the poisoning attack, and that the proposed defense method can detect and mitigate it.
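The core idea in the abstract, filtering out adversaries by auditing each uploaded model's accuracy before aggregation, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes the auditing data is already available (the paper generates it with a GAN), uses a toy linear classifier, and the names `audit_accuracy`, `robust_fedavg`, and the threshold value are all hypothetical.

```python
import numpy as np

def audit_accuracy(w, X_audit, y_audit):
    # Accuracy of a linear classifier sign(X @ w) on the auditing data.
    preds = np.sign(X_audit @ w)
    return float(np.mean(preds == y_audit))

def robust_fedavg(client_updates, X_audit, y_audit, threshold=0.6):
    # Audit each client's model on the auditing data; drop any update whose
    # accuracy falls below the threshold (suspected poisoning), then average
    # the surviving updates as in plain federated averaging.
    kept = [w for w in client_updates
            if audit_accuracy(w, X_audit, y_audit) >= threshold]
    if not kept:
        raise ValueError("all client updates failed the audit")
    return np.mean(kept, axis=0), len(kept)

# Toy demonstration: two honest clients near the true model and one
# poisoned client whose update would flip the global decision boundary.
rng = np.random.default_rng(0)
w_true = np.array([1.0, 0.0])
X_audit = rng.normal(size=(200, 2))
y_audit = np.sign(X_audit @ w_true)

honest = [np.array([0.9, 0.1]), np.array([1.1, -0.1])]
poisoned = np.array([-1.0, 0.0])

w_global, n_kept = robust_fedavg(honest + [poisoned], X_audit, y_audit)
print(n_kept)  # the poisoned update is filtered out; only honest ones remain
```

The auditing step is what distinguishes this from plain FedAvg: a poisoned update scores near-zero accuracy on the auditing set and never reaches the aggregation, while benign updates pass essentially unaffected.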