Keywords: Computer science; Backdoor; Scheme (mathematics); Client; Process (computing); Computer security; Server-side; Visibility; Computer network; Data mining; Mathematics; Operating system; Optics; Physics; Mathematical analysis
Authors
Lingchen Zhao, Shengshan Hu, Qian Wang, Jianlin Jiang, Chao Shen, Xiangyang Luo, Pengfei Hu
Source
Journal: IEEE Transactions on Dependable and Secure Computing [Institute of Electrical and Electronics Engineers]
Date: 2020-01-01
Volume/Issue: 1-1
Citations: 90
Identifier
DOI: 10.1109/tdsc.2020.2986205
Abstract
Collaborative learning allows multiple clients to train a joint model without sharing their data with each other. Each client performs training locally and then submits its model update to a central server for aggregation. Since the server has no visibility into the process that generates the updates, collaborative learning is vulnerable to poisoning attacks, where a malicious client can craft a poisoned update to introduce backdoor functionality into the joint model. The existing solutions for detecting poisoned updates, however, fail to defend against recently proposed attacks, especially in the non-IID (not independent and identically distributed) setting. In this article, we present a novel defense scheme to detect anomalous updates in both IID and non-IID settings. Our key idea is client-side cross-validation, where each update is evaluated over other clients' local data. When performing aggregation, the server adjusts the weight of each update based on these evaluation results. To adapt to the unbalanced data distribution of the non-IID setting, a dynamic client allocation mechanism assigns detection tasks to the most suitable clients. During detection, we also protect client-level privacy, preventing malicious clients from learning which other clients participate, by integrating differential privacy into our design without degrading detection performance. Experimental evaluations on three real-world datasets show that our scheme is robust against two representative poisoning attacks.
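The core mechanism of the abstract, weighting each client's update by how well it performs on other clients' data, can be illustrated with a short sketch. This is not the authors' implementation: it assumes each update is a flat NumPy vector, that `cross_val_scores` holds the mean accuracy other clients reported for each update, and that a simple Laplace perturbation stands in for the paper's more involved differential-privacy mechanism. The names `aggregate` and `eps` are hypothetical.

```python
# Minimal sketch of cross-validation-weighted aggregation in
# collaborative learning (illustrative only, not the paper's code).
import numpy as np

def aggregate(updates, cross_val_scores, eps=None, rng=None):
    """Combine client updates, weighting each by its cross-validation score.

    updates          : list of np.ndarray, one model update per client
    cross_val_scores : list of float in [0, 1]; higher = more trustworthy,
                       as reported by the other clients that evaluated it
    eps              : optional Laplace-noise scale standing in for the
                       differential-privacy step described in the paper
    """
    rng = rng or np.random.default_rng(0)
    scores = np.asarray(cross_val_scores, dtype=float)
    if eps is not None:
        # Hypothetical DP step: perturb the scores so an observer cannot
        # infer exactly which clients evaluated which update.
        scores = scores + rng.laplace(scale=1.0 / eps, size=scores.shape)
    scores = np.clip(scores, 0.0, None)
    if scores.sum() > 0:
        weights = scores / scores.sum()
    else:
        weights = np.full_like(scores, 1.0 / len(scores))
    # Weighted sum of the updates; low-scoring (likely poisoned)
    # updates contribute little to the joint model.
    return sum(w * u for w, u in zip(weights, np.stack(updates)))

# Toy usage: a poisoned update (large, wrong direction) receives a low
# cross-validation score and is mostly suppressed in the aggregate.
honest = [np.array([0.10, 0.10]), np.array([0.12, 0.08])]
poisoned = np.array([5.0, -5.0])
agg = aggregate(honest + [poisoned], cross_val_scores=[0.90, 0.88, 0.05])
print(agg)
```

In the toy run, the poisoned update's low score gives it a small weight, so it barely moves the aggregate; this mirrors the scheme's effect of down-weighting anomalous updates during aggregation rather than hard-rejecting them.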