Computer science
Federated learning
Correctness
Single point of failure
Task (project management)
Scheme (mathematics)
Trustworthiness
Computer security
Encryption
Information privacy
Distributed computing
Algorithm
Mathematics
Mathematical analysis
Economics
Management
Authors
Lingling Wang, Xueqin Zhao, Zhongkai Lu, Lin Wang, Shouxun Zhang
Identifier
DOI:10.1016/j.ins.2023.01.130
Abstract
Decentralized federated learning (DFL) is an emerging privacy-preserving machine learning framework in which multiple data owners cooperate to train a global model without any aggregation server. Besides protecting data privacy, it avoids a single point of failure and reduces the communication congestion caused by a central server. However, many inherent privacy and security issues remain: training samples can be revealed by inferring gradients, and malicious participants may not execute federated learning tasks as intended. Worse still, existing works rarely consider the fundamental issue that data owners may join and drop out of a DFL task at any time. In this paper, we propose PTDFL, a privacy-enhanced and trustworthy decentralized federated learning scheme. Specifically, we first design an efficient gradient encryption algorithm to protect data privacy, and then devise a succinct proof without trapdoors to ensure the correctness of the gradients. Meanwhile, we design a novel local aggregation strategy without a trusted third party to ensure that the aggregated result is trustworthy. Moreover, PTDFL supports data owners joining and dropping out during the whole DFL task. Finally, we provide a privacy and security analysis, implement a prototype of PTDFL, and conduct extensive experiments on a real dataset. The results show that PTDFL is more efficient than prior works.
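The abstract does not spell out the construction, but to make the privacy goal concrete, below is a minimal sketch of pairwise additive masking, one standard building block for hiding individual gradients during serverless aggregation. It is an illustration under stated assumptions, not PTDFL's actual gradient encryption, proof system, or aggregation protocol; `PRIME`, `SCALE`, `pair_seed`, and all other names here are hypothetical.

```python
import hashlib

import numpy as np

PRIME = 2**31 - 1  # toy field modulus (hypothetical; real schemes use far larger fields)
SCALE = 2**10      # fixed-point scale for encoding float gradients

def encode(grad):
    """Fixed-point-encode a float gradient vector into the field."""
    return np.round(grad * SCALE).astype(np.int64) % PRIME

def decode(x):
    """Invert the encoding after summation (sums assumed small relative to PRIME)."""
    x = np.where(x > PRIME // 2, x - PRIME, x)  # recenter values that wrapped negative
    return x.astype(np.float64) / SCALE

def pair_seed(i, j):
    """Toy shared seed for the pair (i, j). A real scheme derives this from a
    key agreement so that only nodes i and j can compute it."""
    lo, hi = min(i, j), max(i, j)
    return int.from_bytes(hashlib.sha256(f"{lo}:{hi}".encode()).digest()[:8], "big")

def masked_gradient(my_id, all_ids, grad):
    """Blind a gradient with pairwise masks that cancel in the global sum:
    node i adds +m_ij when i < j and -m_ij otherwise."""
    x = encode(grad)
    for j in all_ids:
        if j == my_id:
            continue
        rng = np.random.default_rng(pair_seed(my_id, j))
        m = rng.integers(0, PRIME, size=grad.shape[0])
        x = (x + (m if my_id < j else -m)) % PRIME
    return x

# Usage: three nodes aggregate gradients without revealing any single one.
ids = [0, 1, 2]
grads = [np.array([0.5, -1.25]), np.array([2.0, 0.75]), np.array([-0.5, 1.0])]
masked = [masked_gradient(i, ids, g) for i, g in zip(ids, grads)]
total = np.sum(masked, axis=0) % PRIME  # pairwise masks cancel here
print(decode(total))                    # ≈ [2.0, 0.5], the true aggregate
```

Note that a sketch like this does not by itself tolerate dropouts: if a node leaves after sending its masked gradient, its unpaired masks no longer cancel. That dynamic-membership problem is exactly what the abstract says PTDFL addresses with its own mechanisms.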