Computer science
Upload
Federated learning
Computer security
Overhead (engineering)
Confidentiality
Scale (ratio)
Artificial intelligence
Machine learning
World Wide Web
Quantum mechanics
Operating system
Physics
Authors
Jiaqi Zhao, Hui Zhu, Fengwei Wang, Yandong Zheng, Rongxing Lu, Hui Li
Source
Journal: IEEE Transactions on Services Computing
[Institute of Electrical and Electronics Engineers]
Date: 2024-03-20
Volume/Issue: 17 (5): 2320-2333
Citations: 2
Identifier
DOI: 10.1109/tsc.2024.3377931
Abstract
The ever-growing scale of data and increasingly strict privacy constraints have recently drawn extensive attention to federated learning (FL), a multi-party machine learning paradigm that achieves high-quality model construction without collecting raw data. Nevertheless, the local models uploaded in FL can still be exploited by adversaries to infer participants' sensitive data. Furthermore, malicious participants may manipulate the global model by submitting poisoned local models. To tackle these challenges, this paper proposes ELFL, an efficient and privacy-preserving federated learning framework against poisoning adversaries, which ensures the confidentiality of local models while effectively resisting data poisoning attacks. Specifically, we first design a grouped secure aggregation algorithm, through which the aggregation server can compute the sums of local models inside logical groups but cannot see any individual model. Then, building on the grouped aggregations, our poisoning defense mechanism detects malicious participants and quickly removes them from the set of training candidates. Moreover, each participant's computational complexity is independent of the total number of participants, making the framework suitable for large-scale scenarios. A detailed security analysis demonstrates the security of ELFL, and experimental results show that ELFL maintains high accuracy under representative data poisoning attacks while incurring low computational and communication overhead.
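The abstract only names its two building blocks, so the toy Python sketch below is our own illustration of the general ideas, not the ELFL protocol itself: in-group pairwise masking (the standard secure-aggregation trick, under which the server only ever learns group sums) stands in for the grouped secure aggregation algorithm, and a generic robust-statistics filter over the group aggregates stands in for the poisoning defense. Everything here is a hypothetical assumption, including the function names (`pairwise_masks`, `flag_suspicious_groups`), the MAD-based outlier score, and the group sizes.

```python
import numpy as np

def pairwise_masks(group, dim, round_seed=0):
    """Cancelling pairwise masks inside one group (illustrative only).

    Each pair (i, j) with i < j derives a shared random vector; i adds it
    and j subtracts it, so the masks vanish when the whole group is summed.
    """
    masks = {i: np.zeros(dim) for i in group}
    for a, i in enumerate(group):
        for j in group[a + 1:]:
            rng = np.random.default_rng((round_seed, i, j))  # shared seed
            r = rng.normal(size=dim)
            masks[i] += r
            masks[j] -= r
    return masks

def group_mean(uploads):
    """Server side: sum the masked uploads. The masks cancel within the
    group, so the server learns only the group aggregate, never a single
    participant's model."""
    return np.sum(uploads, axis=0) / len(uploads)

def flag_suspicious_groups(group_means, z=3.5):
    """Generic defense sketch (not the paper's mechanism): a modified
    z-score, based on the median absolute deviation (MAD), over each
    group aggregate's distance to the coordinate-wise median."""
    med = np.median(group_means, axis=0)
    dists = np.linalg.norm(group_means - med, axis=1)
    mad = np.median(np.abs(dists - np.median(dists)))
    scores = (dists - np.median(dists)) / (mad + 1e-12)
    return [g for g, s in enumerate(scores) if s > z]

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    dim = 8
    groups = [list(range(g * 4, (g + 1) * 4)) for g in range(4)]
    # Honest local models cluster around a common optimum ...
    models = {i: rng.normal(0.0, 0.1, dim) for g in groups for i in g}
    # ... except group 3, whose members submit poisoned updates.
    for i in groups[3]:
        models[i] += 5.0
    means = []
    for g in groups:
        masks = pairwise_masks(g, dim, round_seed=1)
        uploads = [models[i] + masks[i] for i in g]
        means.append(group_mean(uploads))
    print("suspicious groups:", flag_suspicious_groups(np.array(means)))
```

On this toy data, only the poisoned group's aggregate deviates noticeably from the coordinate-wise median, so it alone is flagged. The paper's actual detection logic, participant-removal procedure, and cryptographic guarantees are more involved than this sketch.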