Differential privacy
Computer science
Federated learning
Convergence (economics)
Distributed learning
Artificial intelligence
Noise (video)
Machine learning
Information privacy
Scheme (mathematics)
Incentive
Distributed computing
Data mining
Computer security
Economic growth
Image (mathematics)
Psychology
Mathematical analysis
Economics
Microeconomics
Mathematics
Pedagogy
Authors
Zhipeng Gao, Yingwen Duan, Yang Yang, Lanlan Rui, Zhao Chen
Identifier
DOI:10.1109/wcnc51071.2022.9771929
Abstract
Federated learning (FL) is a promising paradigm for mitigating data privacy disclosure in large-scale machine learning. To further strengthen privacy protection, prior works incorporate differentially private data perturbation into the federated system. However, this comes at a cost: adding Gaussian noise to achieve differential privacy (DP) degrades model accuracy. Moreover, the common assumption that such a sophisticated system is homogeneous is unrealistic in practice, and heterogeneous networks exacerbate the disruption caused by noise. In this paper, we present FedSeC, a novel differentially private federated learning (DP-FL) framework that achieves robust convergence and high accuracy while providing adequate privacy protection. FedSeC improves upon naive combinations of federated learning and differential privacy with an update-based optimization of relative staleness and a semi-synchronous aggregation approach for fast convergence in heterogeneous networks. In addition, we propose a client selection scheme that trades off fair resource allocation against discriminatory incentives. Through extensive experiments under three different forms of heterogeneity, we show that FedSeC outperforms the previous state-of-the-art methods.
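The Gaussian-mechanism perturbation the abstract refers to can be sketched as follows. This is a minimal illustrative example, not FedSeC's actual implementation: the function name `dp_perturb_update` and the parameters `clip_norm` and `noise_multiplier` are assumptions for exposition. A client first clips its model update to bound the sensitivity, then adds Gaussian noise calibrated to that bound before sending the update to the server.

```python
import numpy as np

def dp_perturb_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a client's model update and add Gaussian noise (Gaussian mechanism).

    Hypothetical sketch of DP-FL perturbation; the interface and
    parameter names are illustrative, not taken from the paper.
    """
    rng = rng or np.random.default_rng()
    # Scale the update so its L2 norm is at most clip_norm (bounds sensitivity).
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Noise standard deviation is proportional to the sensitivity bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Usage: perturb a toy 4-dimensional update before uploading it.
noisy_update = dp_perturb_update(np.array([3.0, 4.0, 0.0, 0.0]), clip_norm=1.0)
```

A larger `noise_multiplier` yields a stronger privacy guarantee (smaller privacy budget) but, as the abstract notes, deteriorates model accuracy, which is exactly the tension FedSeC's staleness-aware, semi-synchronous design tries to ease.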