Differential privacy
Computer science
Overhead (engineering)
Usability
Feature (linguistics)
Privacy software
Information privacy
Stochastic gradient descent
Data mining
Machine learning
Computer security
Human-computer interaction
Artificial neural network
Linguistics
Operating system
Philosophy
Authors
Jie Ling, Junchang Zheng, Jiahui Chen
Identifier
DOI:10.1016/j.cose.2024.103715
Abstract
Federated learning (FL) is a distributed machine learning method that effectively protects personal data. Many studies on federated learning assume that all clients have consistent privacy parameters. In practice, however, different clients have different privacy requirements, and heterogeneous differential privacy can personalize privacy protection according to each client's privacy budget and requirements. In this study, we propose an improved, efficient FL privacy-preservation method with heterogeneous differential privacy, which computes a privacy-budget weight for each client according to its noise level using a secure differential-privacy stochastic gradient descent protocol, histogram-of-oriented-gradients feature extraction, and weighted averaging over the heterogeneous privacy budgets. With this method, noisier clients are given smaller privacy-budget weights to mitigate their negative impact on the aggregated model. Experiments comparing against the baseline method were performed on the MNIST, fMNIST, and CIFAR-10 datasets. The experimental results showed that our method improves model accuracy by 6.68% and 7.18% for 20 to 50 clients and by 16.08% and 17.37% for 60 to 100 clients, respectively. Moreover, communication overhead time was reduced by 23.85%, which validates the effectiveness and usability of the proposed method.
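The core aggregation idea in the abstract can be sketched in a few lines: each client perturbs its update with Gaussian noise calibrated to its own privacy budget, and the server weights clients in proportion to those budgets so that noisier (smaller-epsilon) clients contribute less. This is a minimal illustration, not the paper's actual protocol; the function names, the Gaussian-mechanism noise calibration, and the proportional weighting rule are assumptions for illustration, and the paper's secure DP-SGD protocol and HOG feature-extraction steps are omitted.

```python
import math
import random

def clip_and_noise(grad, clip_norm, epsilon, delta=1e-5):
    """One DP-SGD-style client step (simplified): clip the gradient to
    clip_norm, then add Gaussian noise scaled to this client's budget.
    Smaller epsilon (stricter privacy) yields larger noise."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]
    # Standard Gaussian-mechanism calibration for (epsilon, delta)-DP.
    sigma = clip_norm * math.sqrt(2 * math.log(1.25 / delta)) / epsilon
    return [g + random.gauss(0.0, sigma) for g in clipped]

def budget_weights(epsilons):
    """Weight clients in proportion to their privacy budgets, so noisier
    clients (smaller epsilon) receive smaller aggregation weights."""
    total = sum(epsilons)
    return [e / total for e in epsilons]

def aggregate(client_updates, epsilons):
    """Heterogeneous-budget weighted average of client updates."""
    w = budget_weights(epsilons)
    dim = len(client_updates[0])
    return [sum(w[k] * u[i] for k, u in enumerate(client_updates))
            for i in range(dim)]
```

For example, with budgets [1.0, 4.0, 5.0] the weights are [0.1, 0.4, 0.5], so the client with the most noise (epsilon = 1.0) has the least influence on the aggregated model.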