Computer science
Privacy protection
Information privacy
Computer security
Differential privacy
Federated learning
Internet privacy
Privacy software
Artificial intelligence
Data mining
Authors
Zhe Li, Honglong Chen, Zhichen Ni, Yudong Gao, Wei Lou
Identifier
DOI: 10.1109/TMC.2024.3443862
Abstract
Federated learning (FL) is an effective privacy-preserving mechanism that collaboratively trains a global model in a distributed manner: local clients, such as mobile devices, share only model parameters rather than raw data with a central server. Nevertheless, recent studies have shown that FL still suffers from gradient leakage, as adversaries can recover training data by analyzing the parameters shared by local clients. To address this issue, differential privacy (DP) is adopted to add noise to the parameters of local models before they are aggregated on the server. This, however, degrades gradient-based interpretability, since important weights that capture the salient regions in feature maps may be perturbed. To overcome this problem, we propose a simple yet effective adaptive gradient protection (AGP) mechanism that selectively adds noisy perturbations to those channels of each client model that have a relatively small impact on interpretability. We also provide a theoretical analysis of the convergence of FL under our method. Evaluation results on both IID and Non-IID data demonstrate that the proposed AGP achieves a good trade-off between privacy protection and interpretability in FL. Furthermore, we verify the robustness of the proposed method against two different gradient leakage attacks.
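To make the channel-selective idea concrete, below is a minimal NumPy sketch of the general mechanism the abstract describes: rank a convolutional layer's output channels by a gradient-based importance proxy and add Gaussian noise only to the least important ones. This is an illustrative assumption, not the authors' AGP implementation; the function name `channel_selective_noise`, the mean-absolute-gradient importance score, and the `protect_ratio` parameter are all hypothetical choices for this sketch.

```python
import numpy as np

def channel_selective_noise(weights, grads, noise_std=0.01,
                            protect_ratio=0.5, rng=None):
    """Hypothetical sketch of channel-selective gradient protection.

    weights, grads: arrays of shape (out_channels, in_channels, kH, kW)
    noise_std:      standard deviation of the Gaussian perturbation
    protect_ratio:  fraction of the most important channels left untouched
    """
    rng = np.random.default_rng() if rng is None else rng
    # Per-output-channel importance: mean absolute gradient, used here
    # as a rough proxy for a channel's contribution to saliency maps.
    importance = np.abs(grads).mean(axis=(1, 2, 3))
    n_protect = int(len(importance) * protect_ratio)
    # Channels with the smallest importance scores get perturbed;
    # the top-ranked channels are left intact to preserve interpretability.
    perturb_idx = np.argsort(importance)[: len(importance) - n_protect]
    noisy = weights.copy()
    noisy[perturb_idx] += rng.normal(0.0, noise_std,
                                     size=noisy[perturb_idx].shape)
    return noisy

# Toy usage: a 16-channel conv layer with random weights and gradients.
w = np.random.randn(16, 3, 3, 3).astype(np.float32)
g = np.random.randn(16, 3, 3, 3).astype(np.float32)
w_noisy = channel_selective_noise(w, g, noise_std=0.05, protect_ratio=0.5)
```

In a full FL pipeline, each client would apply such a perturbation to its local update before uploading it for server-side aggregation; the trade-off between privacy and interpretability is then steered by how many channels are protected and how large the noise is.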