Computer science
Differential privacy
Convergence (economics)
Federated learning
Scheme (mathematics)
Function (biology)
Stochastic gradient descent
Regular polygon
Gradient descent
Mathematical optimization
Distributed computing
Theoretical computer science
Artificial intelligence
Algorithm
Mathematics
Artificial neural network
Mathematical analysis
Geometry
Evolutionary biology
Economics
Biology
Economic growth
Authors
Kang Wei,Jun Li,Chuan Ma,Ming Ding,Wen Chen,Jun Wu,Meixia Tao,H. Vincent Poor
Identifier
DOI:10.1109/tifs.2023.3293417
Abstract
Personalized federated learning (PFL), as a novel federated learning (FL) paradigm, is capable of generating personalized models for heterogeneous clients. Combined with a meta-learning mechanism, PFL can further improve convergence performance with few-shot training. However, meta-learning-based PFL performs two stages of gradient descent in each local training round, and therefore poses a more serious risk of information leakage. In this paper, we propose a differential privacy (DP) based PFL (DP-PFL) framework and analyze its convergence performance. Specifically, we first design a privacy budget allocation scheme for the inner and outer update stages based on Rényi DP composition theory. Then, we develop two convergence bounds for the proposed DP-PFL framework, under convex and non-convex loss function assumptions, respectively. Our convergence bounds reveal that 1) there is an optimal size of the DP-PFL model that achieves the best convergence performance for a given privacy level, and 2) there is an optimal tradeoff among the number of communication rounds, convergence performance, and privacy budget. Evaluations on various real-life datasets demonstrate that our theoretical results are consistent with experimental observations. The derived theoretical results can guide the design of DP-PFL algorithms with configurable tradeoff requirements between convergence performance and privacy level.
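The abstract does not disclose the paper's exact noise calibration or its budget-allocation rule, so the following is only a minimal first-order sketch of the two mechanisms it describes: a meta-learning local round in which both the inner (personalization) and outer (meta) gradient steps are clipped and perturbed with Gaussian noise, and an RDP-based accountant that composes the two per-round Gaussian mechanisms and converts the total to (ε, δ)-DP. All names (`dp_local_round`, `sigma_in`, `sigma_out`, `C_in`, `C_out`) and the per-stage noise split are hypothetical placeholders, not the authors' scheme.

```python
import numpy as np

def clip(g, C):
    """Standard DP-SGD clipping: scale g so its L2 norm is at most C."""
    norm = np.linalg.norm(g)
    return g * min(1.0, C / norm) if norm > 0 else g

def dp_local_round(w, grad_fn, data_inner, data_outer, rng,
                   alpha=0.01, beta=0.01,
                   C_in=1.0, C_out=1.0,
                   sigma_in=1.0, sigma_out=1.0):
    """One MAML-style local round with Gaussian noise on both stages.

    sigma_in / sigma_out play the role of the per-stage noise scales that a
    privacy-budget allocation scheme would set; here they are placeholders.
    """
    # Inner update: personalize on the support data with a clipped, noised gradient.
    g_in = clip(grad_fn(w, data_inner), C_in)
    g_in += rng.normal(0.0, sigma_in * C_in, size=g_in.shape)
    w_personal = w - alpha * g_in

    # Outer update: meta-gradient evaluated at the personalized model
    # (first-order approximation, ignoring the Hessian term), also clipped and noised.
    g_out = clip(grad_fn(w_personal, data_outer), C_out)
    g_out += rng.normal(0.0, sigma_out * C_out, size=g_out.shape)
    return w - beta * g_out

def rdp_gaussian(sigma, alpha):
    """RDP of the Gaussian mechanism with noise multiplier sigma at order alpha."""
    return alpha / (2.0 * sigma ** 2)

def eps_from_rdp(sigma_in, sigma_out, rounds, delta, orders=np.arange(2, 128)):
    """Compose the inner + outer Gaussian mechanisms over `rounds` local rounds
    via RDP additivity, then convert to (eps, delta)-DP."""
    rdp = rounds * (rdp_gaussian(sigma_in, orders) + rdp_gaussian(sigma_out, orders))
    return float(np.min(rdp + np.log(1.0 / delta) / (orders - 1)))

# Toy usage: quadratic loss, grad_fn(w, target) = w - target.
rng = np.random.default_rng(0)
w = np.zeros(3)
for _ in range(100):
    w = dp_local_round(w, lambda w, t: w - t, np.ones(3), np.ones(3), rng)
print(eps_from_rdp(sigma_in=1.0, sigma_out=1.0, rounds=100, delta=1e-5))
```

The accountant illustrates the tradeoff the bounds formalize: the per-round RDP costs of the two stages add, so for a fixed (ε, δ) target, more communication rounds force larger noise scales, which in turn degrade convergence.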