Topics
Differential privacy, MNIST database, Computer science, Federated learning, Sensitivity (control systems), Information sensitivity, Noise (video), Scheme (mathematics), Private information retrieval, Information privacy, Artificial intelligence, Machine learning, Distributed learning, Scale (ratio), Data mining, Distributed computing, Deep learning, Computer security, Image (mathematics), Mathematical analysis, Engineering, Physics, Psychology, Quantum mechanics, Mathematics, Electronic engineering, Education
Authors
Rui Xue, Kaiping Xue, Bin Zhu, Xinyi Luo, Tianwei Zhang, Qibin Sun, Jun Lü
Identifier
DOI: 10.1109/TIFS.2023.3318944
Abstract
Federated Learning (FL) enables multiple distributed clients to collaboratively train a model on their own datasets. To mitigate the privacy threats in FL, researchers have proposed the DP-FL strategy, which uses differential privacy (DP) to add carefully calibrated noise to the exchanged parameters and thereby conceal private information. DP-FL guarantees the privacy of FL at the cost of degraded model performance. To balance the trade-off between model accuracy and security, we propose a differentially private federated learning scheme with an adaptive noise mechanism. This is challenging because the distributed nature of FL makes it difficult to estimate sensitivity appropriately, where sensitivity is the DP concept that determines the scale of the noise. To resolve this, we design a generic sensitivity-estimation method based on local and global historical information, and we instantiate it for four commonly used optimizers to verify its effectiveness. Experiments on MNIST, FMNIST, and CIFAR-10 show that our scheme achieves higher accuracy than prior works while maintaining a high level of privacy protection.
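The abstract hinges on one standard DP fact: the sensitivity S bounds how much one client's contribution can change the released value, and under the Gaussian mechanism it fixes the noise scale as sigma = S * sqrt(2 ln(1.25/delta)) / epsilon. The Python sketch below only illustrates that pipeline under stated assumptions; it is not the paper's algorithm. In particular, adaptive_sensitivity is a hypothetical stand-in (an exponential moving average of a client's past update norms blended with the latest global norm) for the authors' estimator from local and global historical information.

import numpy as np

def gaussian_sigma(sensitivity, epsilon, delta):
    # Standard Gaussian-mechanism calibration:
    # sigma = S * sqrt(2 ln(1.25/delta)) / epsilon
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def adaptive_sensitivity(local_norms, global_norm, alpha=0.9):
    # HYPOTHETICAL estimator, not the paper's: EMA of this client's past
    # update norms, averaged with the last global-round norm as a rough
    # proxy for "local and global historical information".
    ema = local_norms[0]
    for n in local_norms[1:]:
        ema = alpha * ema + (1.0 - alpha) * n
    return 0.5 * (ema + global_norm)

def privatize_update(update, sensitivity, epsilon, delta, rng):
    # Clip the client update so its L2 norm really is bounded by
    # `sensitivity`, then add Gaussian noise calibrated to that bound.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, sensitivity / max(norm, 1e-12))
    sigma = gaussian_sigma(sensitivity, epsilon, delta)
    return clipped + rng.normal(0.0, sigma, size=update.shape)

# One simulated round for a single client, with made-up numbers.
rng = np.random.default_rng(0)
update = rng.normal(size=1000)                   # placeholder model update
s = adaptive_sensitivity([1.2, 1.0, 0.9], 1.1)   # placeholder norm history
noisy = privatize_update(update, s, epsilon=1.0, delta=1e-5, rng=rng)

The point of the adaptive estimate is visible in privatize_update: a sensitivity bound that tracks the actual (shrinking) update norms over training lets sigma shrink too, instead of paying for a loose worst-case clip on every round.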