Differential privacy
Upload
Computer science
Artificial noise
Robustness (evolution)
Data mining
Computer network
Transmitter
Biochemistry
Gene
Operating system
Channel (broadcasting)
Chemistry
Author
Wenling Li,Ping Yu,Yanan Cheng,Jianen Yan,Zhaoxin Zhang
Identifier
DOI:10.1109/tsc.2024.3399659
Abstract
Federated learning enables clients to collaboratively train a global model by uploading local gradients while keeping data locally, thereby preserving the security of sensitive data. However, studies have shown that attackers can infer local data from gradients, raising the urgent need for gradient protection. Differential privacy protects local gradients by adding noise. This paper proposes a federated privacy-enhancing algorithm that combines local differential privacy, parameter sparsification, and weighted aggregation for the cross-silo setting. First, our method introduces Rényi differential privacy by adding noise before uploading local parameters, achieving local differential privacy. Moreover, we dynamically adjust the privacy budget to control the amount of noise added, balancing privacy and accuracy. Second, considering the diversity of clients' communication abilities, we propose a novel Top-K method with dynamically adjusted parameter upload rates to effectively reduce and properly allocate communication costs. Finally, based on the data volume, trustworthiness, and upload rates of participants, we employ a weighted aggregation method, which enhances the robustness of the privacy framework. Through experiments, we validate the effective trade-off among privacy, accuracy, communication costs, and robustness achieved by the proposed method.
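The abstract describes a three-stage client-side pipeline: noise injection for local differential privacy, Top-K sparsification of the update, and server-side weighted aggregation. The sketch below illustrates these three stages under simplifying assumptions; it is not the paper's algorithm. In particular, it uses a fixed-scale Gaussian mechanism (the paper dynamically adjusts the privacy budget via Rényi DP accounting), a fixed Top-K ratio (the paper adapts upload rates per client), and ad-hoc aggregation weights; `clip_norm`, `noise_std`, and `top_k_ratio` are illustrative parameters.

```python
import numpy as np

def privatize_update(grad, clip_norm=1.0, noise_std=0.5, top_k_ratio=0.1, rng=None):
    """Clip, add Gaussian noise (local DP), then Top-K sparsify a gradient vector."""
    rng = rng or np.random.default_rng()
    # Clip the gradient to bound its L2 sensitivity
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    # Gaussian mechanism: add noise before anything leaves the client
    noisy = clipped + rng.normal(0.0, noise_std, size=grad.shape)
    # Top-K sparsification: keep only the k largest-magnitude entries
    k = max(1, int(top_k_ratio * grad.size))
    idx = np.argsort(np.abs(noisy))[-k:]
    sparse = np.zeros_like(noisy)
    sparse[idx] = noisy[idx]
    return sparse

def weighted_aggregate(updates, weights):
    """Server-side aggregation with per-client weights (e.g., from data
    volume, trustworthiness, and upload rate), normalized to sum to 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * ui for wi, ui in zip(w, updates))
```

For example, a client would call `privatize_update` on its local gradient each round, and the server would combine the sparse updates with `weighted_aggregate`; only the noisy, sparsified vector ever leaves the client.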