Computer science
Correctness
Verifiable secret sharing
Cloud computing
Computer security
Masking (illustration)
Adversary
Process (computing)
Federated learning
Information privacy
Protocol (science)
Confidentiality
Encryption
Artificial intelligence
Algorithm
Medicine
Operating system
Art
Pathology
Visual arts
Set (abstract data type)
Programming language
Alternative medicine
Authors
Guowen Xu, Hongwei Li, Sen Liu, Kan Yang, Xiaodong Lin
Identifier
DOI: 10.1109/tifs.2019.2929409
Abstract
As an emerging training model for neural networks, federated learning has received widespread attention due to its ability to update parameters without collecting users' raw data. However, since adversaries can track and derive participants' private information from the shared gradients, federated learning is still exposed to various security and privacy threats. In this paper, we consider two major issues in the training process over deep neural networks (DNNs): 1) how to protect users' privacy (i.e., local gradients) during training and 2) how to verify the integrity (or correctness) of the aggregated results returned by the server. To address these problems, several approaches focusing on secure or privacy-preserving federated learning have been proposed and applied in diverse scenarios. However, enabling clients to verify whether the cloud server operates correctly, while guaranteeing users' privacy during training, remains an open problem. In this paper, we propose VerifyNet, the first privacy-preserving and verifiable federated learning framework. Specifically, we first propose a double-masking protocol to guarantee the confidentiality of users' local gradients during federated learning. Then, the cloud server is required to provide each user with a Proof of the correctness of its aggregated results. We claim that an adversary cannot deceive users by forging the Proof unless it can solve the NP-hard problem adopted in our model. In addition, VerifyNet also supports users dropping out during the training process. Extensive experiments conducted on real-world data demonstrate the practical performance of our proposed scheme.
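The double-masking idea in the abstract can be illustrated with a short sketch of the confidentiality layer alone: each user blinds its gradient with antisymmetric pairwise masks, which cancel when the server sums all contributions, plus an individual self mask that is removed once its seed is reconstructed from surviving users' secret shares. This is a minimal toy in the style of the secure-aggregation double masking the abstract refers to, not the paper's actual construction; the function names, the modulus, and the seed-agreement step are assumptions for illustration, and the verification (Proof) layer is omitted.

```python
import numpy as np

DIM = 4          # toy gradient dimensionality
MOD = 2**31 - 1  # public modulus for blinding arithmetic

def prg(seed: int) -> np.ndarray:
    """Expand a shared seed into a pseudorandom mask vector (toy PRG)."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, MOD, size=DIM, dtype=np.int64)

def mask_gradient(uid, gradient, pair_seeds, self_seed):
    """Blind `gradient` with a self mask plus antisymmetric pairwise masks.

    `pair_seeds` maps every other user id to the seed shared with them;
    user i adds +PRG(s_ij) and user j adds -PRG(s_ij), so the pairwise
    masks cancel in the server's sum when nobody drops out."""
    y = (gradient + prg(self_seed)) % MOD
    for other, seed in pair_seeds.items():
        sign = 1 if uid < other else -1
        y = (y + sign * prg(seed)) % MOD
    return y

# --- toy run: three users, no dropouts --------------------------------
users = [1, 2, 3]
grads = {u: np.arange(DIM, dtype=np.int64) * u for u in users}

# Assumed to come from a pairwise key agreement in the real protocol.
pair_seed = {(i, j): hash((i, j)) % MOD for i in users for j in users if i < j}
self_seed = {u: hash(("self", u)) % MOD for u in users}

masked = {}
for u in users:
    seeds = {v: pair_seed[tuple(sorted((u, v)))] for v in users if v != u}
    masked[u] = mask_gradient(u, grads[u], seeds, self_seed[u])

# Server side: pairwise masks cancel in the sum; self masks are removed
# after their seeds are reconstructed from surviving users' secret shares.
agg = sum(masked.values()) % MOD
agg = (agg - sum(prg(self_seed[u]) for u in users)) % MOD

assert np.array_equal(agg, sum(grads.values()) % MOD)
print("recovered aggregate:", agg)
```

In the full protocol, a dropout is handled by having the surviving users reveal shares of the dropped user's pairwise seeds so the server can strip the masks that no longer cancel, and the verifiability layer additionally binds the aggregate to a Proof that each user checks before accepting the result.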