Xixi Huang, Yijuan Ding, Zoe L. Jiang, Shuhan Qi, Xuan Wang, Qing Liao
Source
Journal: World Wide Web [Springer Nature] · Date: 2020-04-30 · Volume/Issue: 23 (4): 2529-2545 · Cited by: 41
Identifier
DOI:10.1007/s11280-020-00780-4
Abstract
Security issues of artificial intelligence attract much attention across research fields and industries such as face recognition, medical care, and client services. Federated learning, proposed by Google, can prevent data leakage during AI training because each enterprise only needs to exchange training parameters without sharing raw data. In this paper, we present a novel differentially private federated learning framework (DP-FL) for unbalanced data. On the cloud server, the DP-FL framework sets different privacy budgets according to the unbalanced data of different users. On the user client, we design a novel differentially private convolutional neural network with adaptive gradient descent (DPAGD-CNN) algorithm to update each user's training parameters. Experimental results on several real-world datasets demonstrate that the DP-FL framework can protect data privacy while achieving higher accuracy than existing works.
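To make the two ideas in the abstract concrete, the sketch below shows one possible way a server could allocate per-user privacy budgets in proportion to unbalanced local data sizes, and how a client could apply a clipped, noised gradient update. This is a minimal illustration under assumed formulas, not the paper's actual DP-FL or DPAGD-CNN algorithm; all function names and parameters are hypothetical.

```python
import numpy as np

def assign_privacy_budgets(data_sizes, total_epsilon=1.0):
    """Hypothetical server-side rule: users with more data receive a
    larger share of the total privacy budget (assumption, not the paper's rule)."""
    sizes = np.asarray(data_sizes, dtype=float)
    return total_epsilon * sizes / sizes.sum()

def dp_client_update(weights, grad, epsilon, clip_norm=1.0, lr=0.1):
    """Hypothetical client-side step: clip the gradient to bound sensitivity,
    add Laplace noise scaled by the user's budget, then take a descent step."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    noise = np.random.laplace(scale=clip_norm / epsilon, size=grad.shape)
    return weights - lr * (clipped + noise)

# Toy usage: three users with unbalanced data sizes share a total budget.
budgets = assign_privacy_budgets([100, 1000, 5000], total_epsilon=1.0)
w = np.zeros(4)
for eps in budgets:
    g = np.random.randn(4)          # stand-in for a real local gradient
    w = dp_client_update(w, g, eps)
```

In this toy setup, only noised parameter updates would leave each client, which is the property the abstract attributes to federated learning with differential privacy.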