Computer science
Homomorphic encryption
Computer security
Unavailability
Inference
Upload
Threat model
Encryption
Scheme (mathematics)
MNIST database
Machine learning
Deep learning
Artificial intelligence
World Wide Web
Engineering
Mathematical analysis
Mathematics
Reliability engineering
Authors
Yuan Zheng,Youliang Tian,Zhou Zhou,Ta Li,Shuai Wang,Jinbo Xiong
Source
Journal: IEEE Transactions on Network Science and Engineering
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-05
Volume/Issue: 11 (5): 3969-3982
Citations: 12
Identifier
DOI: 10.1109/tnse.2024.3350365
Abstract
In the era of Web 3.0, federated learning has emerged as a crucial technique for resolving the conflict between data security and open sharing. However, federated learning is susceptible to various malicious behaviors, including inference attacks, poisoning attacks, and free-riding attacks, which can lead to privacy breaches, unavailability of the global model, and unfair training processes. To tackle these challenges, we propose a trustworthy federated learning scheme (TWFL) that resists these malicious attacks. Specifically, we first propose a novel adaptive method based on two-trapdoor homomorphic encryption to encrypt the gradients uploaded by users, thereby resisting inference attacks. Second, we design confidence-calculation and contribution-calculation mechanisms to resist poisoning attacks and free-riding attacks. Finally, we prove the security of the scheme through formal security analysis and demonstrate, through experiments on the MNIST and Fashion-MNIST datasets, that TWFL achieves 2%–3% higher model accuracy than traditional methods such as Median and Trim-mean. In summary, TWFL not only resists a variety of attacks but also improves accuracy, making it a trustworthy solution for Web 3.0 privacy-protection scenarios.
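The abstract names two defense layers: encrypting uploaded gradients with two-trapdoor homomorphic encryption against inference attacks, and confidence/contribution scoring against poisoning and free-riding attacks. The sketch below is only a minimal illustration of the general idea of confidence-weighted aggregation; the cosine-similarity-to-median heuristic and the names confidence_scores and aggregate are assumptions introduced here for illustration and do not reproduce the paper's actual mechanisms or its encryption layer.

# Illustrative sketch only: generic confidence-weighted aggregation for a
# federated learning round. The scoring heuristic (cosine similarity to the
# coordinate-wise median update) is a hypothetical stand-in, not the TWFL
# confidence/contribution mechanism, and no homomorphic encryption is applied.
import numpy as np

def confidence_scores(client_updates):
    """Score each client's update by cosine similarity to the median update."""
    reference = np.median(client_updates, axis=0)
    ref_norm = np.linalg.norm(reference) + 1e-12
    scores = []
    for u in client_updates:
        cos = float(np.dot(u, reference) / ((np.linalg.norm(u) + 1e-12) * ref_norm))
        scores.append(max(cos, 0.0))  # negative similarity -> zero weight
    scores = np.asarray(scores)
    total = scores.sum()
    return scores / total if total > 0 else np.full(len(scores), 1.0 / len(scores))

def aggregate(client_updates):
    """Weighted average of client updates using the confidence scores."""
    weights = confidence_scores(client_updates)
    return np.average(client_updates, axis=0, weights=weights)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = [rng.normal(1.0, 0.1, size=10) for _ in range(8)]
    poisoned = [rng.normal(-5.0, 0.1, size=10) for _ in range(2)]  # outlier updates
    updates = np.stack(honest + poisoned)
    print("aggregated update:", aggregate(updates))

In this toy run the two outlier updates receive near-zero weight, which mimics, in spirit only, how a confidence mechanism can down-weight poisoned or free-riding contributions before aggregation.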