Differential privacy
Computer science
Sampling (signal processing)
MNIST database
Upper and lower bounds
Convergence (economics)
Noise (video)
Information privacy
Scale (ratio)
Data mining
Machine learning
Artificial intelligence
Deep learning
Computer security
Mathematics
Mathematical analysis
Physics
Filter (signal processing)
Quantum mechanics
Economics
Image (mathematics)
Computer vision
Economic growth
Authors
Yizhou Chen, Wangjie Xu, Xincheng Wu, Meng Zhang, Bing Luo
Identifier
DOI: 10.1109/icassp48485.2024.10447542
Abstract
Differentially Private Federated Learning (DP-FL) is a promising paradigm for training models on large-scale decentralized data under Differential Privacy (DP) guarantees, but it confronts two challenges: 1) providing a privacy guarantee without sacrificing model performance; and 2) tackling system heterogeneity and data heterogeneity under DP. Recent works on DP-FL have focused on uniform client sampling and uniform privacy settings, neglecting the impact of client sampling on the trade-offs between utility, communication, and personalized privacy. This paper proposes a novel Adaptive Client Sampling algorithm for Personalized Local Differentially Private Federated Learning to address these issues. We derive a new convergence bound for non-convex objectives with personalized differential privacy and arbitrary client sampling. We also analyze PLDP under client sampling, showing that the same level of privacy guarantee can be maintained with a smaller noise scale. Based on this bound and analysis, we establish the relation between client sampling, the privacy bound, and the utility bound, which yields optimization problems for non-convex bound minimization. Simulation and prototype results on the MNIST and EMNIST datasets demonstrate that our algorithm is superior to existing baselines.
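The abstract describes the ingredients only at a high level: non-uniform (arbitrary) client sampling, personalized per-client noise scales, and the observation that sampling lets the same privacy level be met with less noise (the standard subsampling amplification lemma turns an (ε, δ)-DP mechanism applied at sampling rate q into roughly a (log(1 + q(e^ε − 1)), qδ)-DP one; the paper's own analysis may differ). As a rough, hypothetical illustration of that setting rather than the authors' actual algorithm, the following minimal Python/NumPy sketch runs a DP-FL loop with Poisson client sampling and personalized Gaussian noise; the constants, the toy quadratic objective, and names such as q, sigma, and CLIP are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical setup (illustrative values, not from the paper) ------------
NUM_CLIENTS = 100          # total clients
DIM = 10                   # model dimension
CLIP = 1.0                 # per-client update clipping norm C
ROUNDS = 50
LR = 0.5

# Per-client sampling probabilities q_i and personalized noise multipliers
# sigma_i; in the paper these would be chosen by the optimization, here fixed.
q = rng.uniform(0.05, 0.3, size=NUM_CLIENTS)
sigma = rng.uniform(1.0, 3.0, size=NUM_CLIENTS)

# Toy local objective: each client holds a target vector and its quadratic loss
# 0.5 * ||model - target||^2 stands in for the real non-convex objective.
targets = rng.normal(size=(NUM_CLIENTS, DIM))
model = np.zeros(DIM)

def local_update(model, target):
    """One local gradient-descent step on the toy quadratic loss."""
    grad = model - target
    return -LR * grad      # the client's model delta

for t in range(ROUNDS):
    # 1) Arbitrary (non-uniform) client sampling: client i participates w.p. q_i.
    participants = np.flatnonzero(rng.random(NUM_CLIENTS) < q)
    if participants.size == 0:
        continue

    deltas = []
    for i in participants:
        delta = local_update(model, targets[i])

        # 2) Clip the update so its sensitivity is bounded by C.
        norm = np.linalg.norm(delta)
        delta = delta * min(1.0, CLIP / norm)

        # 3) Personalized local DP: add Gaussian noise calibrated to this
        #    client's own sigma_i before the update leaves the device.
        delta += rng.normal(scale=sigma[i] * CLIP, size=DIM)
        deltas.append(delta)

    # 4) Server aggregates the already-noised updates (FedAvg-style mean).
    model += np.mean(deltas, axis=0)

print("final model:", np.round(model, 3))

In the paper's setting, the sampling probabilities and personalized noise scales would be set by solving the bound-minimization problems derived from the convergence and privacy analysis, rather than drawn at random as in this sketch.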