Keywords: Adaptability, Differential privacy, Computer science, Data aggregator, Information privacy, Overhead (engineering), Artificial noise, Computer network, Quantization (signal processing), Redundancy (engineering), Coding, Distributed computing, Computer security, Data mining, Algorithm, Channel (communications), Ecology, Biology, Statistics, Transmitter, Wireless sensor network, Mathematics, Operating system
Authors
Xuehua Sun,Zengsen Yuan,Xianguang Kong,Liang Xue,Lang He,Lin Ying
Source
Journal: IEEE Internet of Things Journal
[Institute of Electrical and Electronics Engineers]
Date: 2024-05-02
Volume/Issue: 11 (15): 26430-26443
Citations: 2
Identifier
DOI:10.1109/jiot.2024.3396217
Abstract
Federated Learning (FL) aims to protect data privacy while aggregating models. Existing works rarely address simultaneously the three main challenges facing FL: communication efficiency, privacy, and utility. In particular, sensitive information about the training data can still be inferred from the model parameters shared in FL. In recent years, Differential Privacy (DP) has been applied in FL to protect data privacy. The challenge of implementing DP in FL lies in the detrimental impact of differential privacy noise on model accuracy. The DP noise affects the convergence of the model, leading to additional communication overhead. Moreover, given the inherently high communication costs of FL, the training process can become inefficient or even infeasible. In view of this, we propose a novel Differentially Private Federated Learning (DPFL) scheme named Adap-FedITK, which aims to achieve low communication overhead and high model accuracy while guaranteeing client-level DP. Specifically, we dynamically adjust the gradient clipping threshold for different clients in each round, based on the heterogeneity of their gradients. This approach mitigates the negative impact of DP and achieves a privacy-utility trade-off. To alleviate the high communication overhead of FL, we introduce an improved Top-k algorithm that uses sparsification and quantization to compress the model and eliminate communication redundancy, and further integrates coding techniques to reduce communication costs. Extensive experimental results demonstrate that our method achieves the privacy-utility trade-off and improves communication efficiency while ensuring client-level DPFL.
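The two building blocks the abstract names can be illustrated with a minimal sketch. This is not the authors' Adap-FedITK implementation: the function names, parameters, and the simple per-round clipping threshold are hypothetical, and it shows only the generic mechanics of (1) L2 gradient clipping with Gaussian DP noise on a client update and (2) Top-k sparsification with uniform quantization before transmission.

```python
import numpy as np

def clip_and_noise(grad, clip_threshold, noise_multiplier, rng):
    """Clip a client's gradient to an L2 threshold, then add Gaussian noise
    scaled to that threshold (standard Gaussian-mechanism DP pattern).
    In Adap-FedITK the threshold would be adapted per client per round."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_threshold / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_threshold, size=grad.shape)
    return clipped + noise

def top_k_compress(grad, k, num_bits=8):
    """Keep the k largest-magnitude entries and quantize them uniformly
    to signed num_bits integers; returns (indices, quantized values, scale)."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    values = grad[idx]
    scale = max(float(np.max(np.abs(values))), 1e-12)
    levels = 2 ** (num_bits - 1) - 1
    quantized = np.round(values / scale * levels).astype(np.int8)
    return idx, quantized, scale

def top_k_decompress(idx, quantized, scale, size, num_bits=8):
    """Reconstruct a dense gradient from the compressed representation."""
    levels = 2 ** (num_bits - 1) - 1
    grad = np.zeros(size)
    grad[idx] = quantized.astype(np.float64) / levels * scale
    return grad

rng = np.random.default_rng(0)
grad = rng.normal(size=1000)
private_grad = clip_and_noise(grad, clip_threshold=1.0,
                              noise_multiplier=0.5, rng=rng)
idx, q, scale = top_k_compress(private_grad, k=50)
restored = top_k_decompress(idx, q, scale, size=1000)
```

Only `idx`, `q`, and `scale` would be sent over the channel, so the payload shrinks from 1000 floats to 50 int8 values plus indices and one scale; entropy coding (as the paper's coding step suggests) could shrink it further.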