Differential privacy
Computer science
Information privacy
Privacy software
Inference
Deep learning
Data modeling
Machine learning
Artificial intelligence
Computer security
Data mining
Database
Authors
Wenlong Song, Hong Chen, Zhijie Qiu, Lei Luo
Identifier
DOI: 10.1109/bigdata59044.2023.10386546
Abstract
With the rapid growth of data and increasing awareness of privacy protection, data privacy has become a central concern in machine learning. Federated learning, a distributed learning method, enables collaborative model training while preserving data privacy by keeping the data stationary and moving the model instead. However, during federated learning there remains a risk of privacy leakage when aggregating the intermediate parameters of models trained by different data providers. Researchers have found that adding noise to the model's intermediate parameters via differential privacy can effectively prevent privacy inference against the data contributors. Nevertheless, federated learning models under differential privacy face an inherent trade-off between accuracy and privacy: strengthening privacy protection often degrades model performance. This trade-off becomes more pronounced in complex deep learning models that require many iterations to converge. To address data privacy, data silos, and the trade-off between privacy leakage and model availability in federated deep learning, this paper proposes a relaxed differentially private federated learning approach. It reduces the impact of noise on the final results by selectively perturbing gradients when data providers return intermediate model parameters. Experiments demonstrate that this approach achieves high accuracy while preserving data privacy, and that it offers superior computational efficiency, striking a well-balanced compromise between accuracy and privacy.
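To illustrate the idea of selective gradient perturbation described in the abstract, the sketch below clips a gradient vector (the standard sensitivity bound in differentially private training) and then adds Gaussian noise to only a fraction of its coordinates. The selection rule (top-magnitude coordinates), the parameter names, and the noise scale are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def selectively_perturb(grad, clip_norm=1.0, noise_scale=0.5,
                        perturb_frac=0.5, rng=None):
    """Clip a gradient to bounded L2 norm, then add Gaussian noise only to
    a chosen subset of coordinates (here: the largest-magnitude ones).
    A minimal sketch of selective perturbation; all parameters and the
    selection heuristic are assumptions for illustration."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.asarray(grad, dtype=float)
    # Clip so the contribution of any one provider has bounded L2 norm.
    norm = np.linalg.norm(g)
    if norm > clip_norm:
        g = g * (clip_norm / norm)
    # Perturb only the top-|perturb_frac| coordinates by magnitude,
    # leaving the rest noise-free to limit the accuracy loss.
    k = max(1, int(len(g) * perturb_frac))
    idx = np.argsort(np.abs(g))[-k:]
    noisy = g.copy()
    noisy[idx] += rng.normal(0.0, noise_scale * clip_norm, size=k)
    return noisy
```

In a federated round, each data provider would apply such a perturbation to its local update before sending it to the aggregator, so that unperturbed coordinates keep their accuracy while the perturbed ones carry the privacy noise.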