Computer science
Differential privacy
Upload
Notification
Leakage (economics)
Convergence (economics)
Gradient descent
Information leakage
Private information retrieval
Federated learning
Computer security
Data mining
Artificial intelligence
Artificial neural network
Macroeconomics
Economics
Law
Operating system
Economic growth
Political science
Authors
Jiahui Hu, Zhibo Wang, Shen Yong-sheng, Bohan Lin, Peng Sun, Xiaoyi Pang, Jian Liu, Kui Ren
Identifier
DOI:10.1109/tnet.2023.3317870
Abstract
Federated learning (FL) requires frequent uploading and updating of model parameters, which makes it naturally vulnerable to gradient leakage attacks (GLAs) that reconstruct private training data from gradients. Although some works incorporate differential privacy (DP) into FL to mitigate this privacy issue, their performance is unsatisfactory because they overlook the fact that GLAs pose heterogeneous risks of privacy leakage (RoPL) to gradients from different communication rounds and clients. In this paper, we propose an Adaptive Privacy-Preserving Federated Learning (Adp-PPFL) framework that achieves strong privacy protection against GLAs while maintaining good model accuracy and convergence speed. Specifically, a leakage risk-aware privacy decomposition mechanism provides adaptive privacy protection to different communication rounds and clients by dynamically allocating the privacy budget according to the quantified RoPL. In particular, we design a round-level and a client-level RoPL quantification method to measure the risk of a GLA recovering private data from gradients in different communication rounds and clients, respectively, using only the limited information available in general FL settings. Furthermore, to improve FL training performance (i.e., convergence speed and global model accuracy), we propose an adaptive privacy-preserving local training mechanism that dynamically clips gradients and decays the noise added to the clipped gradients during local training. Extensive experiments show that our framework outperforms existing differentially private FL schemes in model accuracy, convergence, and attack resistance.
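The two mechanisms the abstract describes, RoPL-proportional budget allocation and local training with dynamic clipping plus noise decay, can be sketched in a few lines. The following is a minimal, hypothetical Python illustration, not the authors' implementation: `allocate_budget` and `dp_local_update` are invented names, the RoPL scores are placeholder inputs, and the Gaussian-mechanism calibration, median-based clipping rule, and geometric decay factor are all illustrative assumptions, since the abstract does not specify the paper's actual quantification or schedules.

```python
import numpy as np

rng = np.random.default_rng(0)

def allocate_budget(total_eps, ropl_scores):
    """Split a total privacy budget across clients (or rounds) in
    proportion to their quantified risk of privacy leakage (RoPL):
    a riskier client receives a larger epsilon share, i.e. less noise
    where leakage matters most. The proportional rule is an assumption."""
    s = np.asarray(ropl_scores, dtype=float)
    return total_eps * s / s.sum()

def dp_local_update(grads, eps, clip0=1.0, noise_decay=0.9, delta=1e-5):
    """Run one client's local steps with dynamic clipping and decaying
    noise. `grads` is the sequence of raw gradients from local steps;
    here the clipping threshold tracks a running median of observed
    gradient norms, and the Gaussian noise scale shrinks geometrically
    per step (both schedules are illustrative, not the paper's)."""
    # Base noise scale from the standard Gaussian mechanism (eps < 1).
    sigma0 = clip0 * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    norms, out = [], []
    for t, g in enumerate(grads):
        norms.append(np.linalg.norm(g))
        clip = float(np.median(norms))              # adapt clip to recent norms
        g_clip = g * min(1.0, clip / (norms[-1] + 1e-12))
        sigma = sigma0 * (noise_decay ** t)         # decay noise over local steps
        out.append(g_clip + rng.normal(0.0, sigma, size=g.shape))
    return out

# Toy usage: three clients with different RoPL scores share eps = 1.0,
# and the riskiest client runs five noisy local steps.
eps_per_client = allocate_budget(1.0, ropl_scores=[0.5, 0.2, 0.3])
fake_grads = [rng.normal(size=4) for _ in range(5)]
noisy_grads = dp_local_update(fake_grads, eps=eps_per_client[0])
```

The design intuition matches the abstract: noise is spent where reconstruction risk is highest, and as local training converges and gradient norms shrink, both the clipping bound and the injected noise shrink with them, which is what preserves convergence speed and final accuracy.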