Computer science
Differential privacy
Exploit
Federated learning
MNIST database
Information leakage
Inference
Machine learning
Convergence (economics)
Privacy protection
Private information retrieval
Artificial intelligence
Key (lock)
Information sensitivity
Information privacy
Data mining
Computer security
Deep learning
Economy
Economic growth
Authors
Taiyu Wang,Qinglin Yang,Kaiming Zhu,Junbo Wang,Chunhua Su,Kento Sato
Identifier
DOI:10.1109/tifs.2023.3322328
Abstract
Federated Learning (FL) has attracted extraordinary attention from industry and academia due to its advantages in privacy protection and collaborative training on isolated datasets. Since machine learning algorithms usually try to find an optimal hypothesis to fit the training data, attackers can also exploit the shared models and reversely analyze users' private information. However, there is still no satisfactory solution to the privacy-accuracy trade-off: making information leakage more difficult while still guaranteeing the convergence of learning. In this work, we propose a Loss Differential Strategy (LDS) for parameter replacement in FL. The key idea of our strategy is to preserve the performance of the private model through parameter replacement with multi-user participation, while significantly reducing the effectiveness of privacy attacks on the model. To evaluate the proposed method, we have conducted comprehensive experiments on four typical machine learning datasets to defend against membership inference attacks. For example, the accuracy on MNIST is near 99%, while the attack accuracy is reduced by 10.1% compared with FedAvg. Compared with other traditional privacy protection mechanisms, our method also outperforms them in terms of accuracy and privacy preservation.
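The abstract contrasts LDS against plain FedAvg aggregation. The sketch below shows standard FedAvg (coordinate-wise averaging of client parameters) together with a purely hypothetical `swap_closest_loss` step illustrating the general idea of loss-guided parameter replacement among multiple users; it is not the paper's actual LDS algorithm, whose details are not given in this abstract.

```python
def fedavg(client_params):
    """Standard FedAvg aggregation: coordinate-wise mean of the
    parameter vectors uploaded by all clients."""
    n = len(client_params)
    dim = len(client_params[0])
    return [sum(p[j] for p in client_params) / n for j in range(dim)]

def swap_closest_loss(client_params, client_losses):
    """Hypothetical illustration (NOT the paper's LDS): each client
    uploads the parameters of the peer whose training loss is closest
    to its own, so no uploaded model is fitted directly to the data of
    the client that submits it. This mimics the abstract's idea of
    'parameter replacement with multi-user participation'."""
    swapped = []
    for i, loss in enumerate(client_losses):
        # find the other client with the most similar loss
        j = min((k for k in range(len(client_losses)) if k != i),
                key=lambda k: abs(client_losses[k] - loss))
        swapped.append(client_params[j])
    return swapped

# Toy example: three clients, two-dimensional parameter vectors.
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
losses = [0.10, 0.12, 0.50]
global_model = fedavg(swap_closest_loss(clients, losses))
```

Aggregating the swapped parameters instead of the originals keeps the averaged global model in the same region of parameter space (so accuracy can be maintained), while a membership inference attacker inspecting any single upload no longer observes a model trained on that client's own data.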