Differential privacy
Computer science
Upload
Shuffling
Identity
Noise
Artificial noise
Scheme (mathematics)
Information privacy
Theoretical computer science
Computer security
Artificial intelligence
Data mining
Computer network
Mathematics
World Wide Web
Image (mathematics)
Programming language
Mathematical analysis
Channel (broadcasting)
Physics
Transmitter
Acoustics
Authors
Chen Gu, Xuande Cui, Xiaoling Zhu, Donghui Hu
Source
Journal: IEEE Transactions on Industrial Informatics
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume/Issue: 1-12
Citations: 4
Identifier
DOI: 10.1109/tii.2023.3331726
Abstract
Federated learning (FL) is a promising paradigm for collaboratively training networks on distributed clients while retaining data locally. Recent work has shown that personal data can be recovered even though clients only send gradients to the server. To counter this gradient leakage issue, differential privacy (DP)-based solutions have been proposed that protect data privacy by adding noise to the gradient before sending it to the server. However, the introduced noise degrades the training efficiency of local clients, resulting in low model accuracy. Moreover, the identity privacy of clients has not been seriously considered in FL. In this article, we propose FL2DP, a privacy-preserving scheme that protects both the data privacy and the identity privacy of clients. Unlike current schemes, which add noise sampled from a Gaussian or Laplace distribution, our scheme adds noise to the gradient based on the exponential mechanism to achieve high training efficiency. Clients then upload the perturbed gradients to a shuffler, which reassigns these gradients to different identities. We give a formal privacy definition, called gradient indistinguishability, to provide strict unlinkability for gradient shuffling. We propose a new gradient shuffling mechanism by adapting the DP-based exponential mechanism to satisfy gradient indistinguishability using a designed utility function. As a result, an attacker cannot infer the real identity of a client from its shuffled gradient. We conduct extensive experiments on two real-world datasets, and the results demonstrate the effectiveness of the proposed scheme.
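The abstract names two components: exponential-mechanism-based gradient perturbation on the client, and a shuffler that reassigns perturbed gradients to fresh identities. Below is a minimal Python sketch of that general shape. All names (exponential_mechanism, perturb_gradient, shuffle_with_pseudonyms), the candidate-generation strategy, the L2-based utility, the sensitivity bound, and the uniform-permutation shuffler are illustrative assumptions; the paper's actual utility functions and its DP-biased shuffling mechanism are not specified in the abstract.

```python
import numpy as np

def exponential_mechanism(candidates, utilities, eps, sensitivity):
    """Sample one candidate with probability proportional to
    exp(eps * u / (2 * sensitivity)) -- the standard exponential mechanism."""
    logits = np.asarray(utilities, dtype=float) * eps / (2.0 * sensitivity)
    logits -= logits.max()          # subtract max for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return candidates[np.random.choice(len(candidates), p=probs)]

def perturb_gradient(grad, eps, n_candidates=16, scale=0.01):
    """Hypothetical client-side perturbation: draw candidate gradients near
    the true one and select via the exponential mechanism, using negative
    L2 distance to the true gradient as the utility (an assumption; the
    paper's utility function and sensitivity bound are not given here)."""
    candidates = [grad + scale * np.random.randn(*grad.shape)
                  for _ in range(n_candidates)]
    utilities = [-float(np.linalg.norm(c - grad)) for c in candidates]
    return exponential_mechanism(candidates, utilities, eps, sensitivity=scale)

def shuffle_with_pseudonyms(perturbed_grads, rng=None):
    """Placeholder shuffler: permute the perturbed gradients and attach fresh
    pseudonyms so the server cannot link a gradient to its client. The paper
    instead biases the shuffle with an exponential mechanism over a designed
    utility function; a uniform permutation stands in for it here."""
    rng = np.random.default_rng() if rng is None else rng
    order = rng.permutation(len(perturbed_grads))
    return [(f"pseudo-{i}", perturbed_grads[j]) for i, j in enumerate(order)]

if __name__ == "__main__":
    grads = [np.random.randn(10) for _ in range(5)]   # one gradient per client
    noisy = [perturb_gradient(g, eps=1.0) for g in grads]
    for pid, g in shuffle_with_pseudonyms(noisy):
        print(pid, g[:3])
```

The point of the sketch is the pipeline shape: perturb locally under the exponential mechanism, then break the gradient-to-identity link at the shuffler before anything reaches the server.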