Differential privacy
Computer science
Byzantine fault tolerance
Federated learning
Adversarial system
Robustness (evolution)
Adversary
MNIST database
Distributed learning
Quantum Byzantine agreement
Scheme (mathematics)
Protocol (science)
Artificial intelligence
Machine learning
Computer security
Deep learning
Theoretical computer science
Distributed computing
Data mining
Fault tolerance
Authors
Xu Ma, Xiaoqian Sun, Yuduo Wu, Zheli Liu, Xiaofeng Chen, Changyu Dong
Source
Journal: IEEE Transactions on Parallel and Distributed Systems
[Institute of Electrical and Electronics Engineers]
Date: 2022-12-01
Volume/Issue: 33 (12): 3690-3701
Citations: 21
Identifier
DOI:10.1109/tpds.2022.3167434
Abstract
Federated learning is a collaborative machine learning framework in which a global model is trained by different organizations under privacy restrictions. Promising as it is, privacy and robustness issues emerge when an adversary attempts to infer private information from the exchanged parameters or to compromise the global model. Various protocols have been proposed to counter these security risks; however, it remains challenging to make federated learning protocols robust against Byzantine adversaries while preserving the privacy of individual participants. In this article, we propose a differentially private Byzantine-robust federated learning scheme (DPBFL) with high computation and communication efficiency. The proposed scheme is effective in preventing adversarial attacks launched by Byzantine participants and achieves differential privacy through a novel aggregation protocol in the shuffle model. The theoretical analysis indicates that the proposed scheme converges to an approximately optimal solution, with a learning error that depends on the differential privacy budget and the number of Byzantine participants. Experimental results on MNIST, FashionMNIST and CIFAR10 demonstrate that the proposed scheme is effective and efficient.
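To make the two ideas in the abstract concrete, the following is a minimal illustrative sketch, not the paper's DPBFL protocol: it combines a coordinate-wise trimmed mean (a standard Byzantine-robust aggregation rule) with Laplace noise (the classic differential privacy mechanism). The function name, the trimming parameter `trim_k`, and the use of Laplace rather than the paper's shuffle-model mechanism are all assumptions for illustration only.

```python
import numpy as np

def dp_byzantine_robust_aggregate(updates, trim_k, epsilon, sensitivity, rng=None):
    """Illustrative sketch: Byzantine-robust aggregation with differential privacy.

    updates:     (n_participants, dim) array of clipped model updates
    trim_k:      number of smallest and largest values dropped per coordinate,
                 which bounds the influence of up to trim_k Byzantine participants
    epsilon:     differential privacy budget (smaller = more noise)
    sensitivity: L1 sensitivity of the trimmed mean w.r.t. one participant
    """
    rng = np.random.default_rng() if rng is None else rng
    u = np.sort(np.asarray(updates, dtype=float), axis=0)
    trimmed = u[trim_k : u.shape[0] - trim_k]      # discard extreme values per coordinate
    mean = trimmed.mean(axis=0)                    # robust aggregate of honest updates
    scale = sensitivity / epsilon                  # Laplace mechanism noise scale
    return mean + rng.laplace(0.0, scale, size=mean.shape)
```

For example, with three honest updates near 1.0 and one Byzantine update of 1000.0, setting `trim_k=1` discards the outlier, so the aggregate stays close to 1.0 (up to the injected privacy noise).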