Keywords
Quadratic growth, variance, convergence, scheme, meta epoch, rate of convergence, variance reduction, upper and lower bounds, algorithm, computer science, artificial intelligence, mathematical analysis
Authors
Grigory Malinovsky, Samuel Horváth, Konstantin Burlachenko, Peter Richtárik
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Citations: 2
Identifier
DOI: 10.48550/arxiv.2302.03662
Abstract
Federated Learning (FL) is a distributed machine learning approach in which multiple clients work together to solve a machine learning task. One of the key challenges in FL is partial participation, which arises when a large number of clients are involved in the training process. The traditional way to address this problem is to randomly select a subset of clients at each communication round. In our research, we propose a new technique and design a novel regularized client participation scheme. Under this scheme, each client joins the learning process every $R$ communication rounds, a period we refer to as a meta epoch. We find that this participation scheme reduces the variance caused by client sampling. Combined with the popular FedAvg algorithm (McMahan et al., 2017), it results in superior rates under standard assumptions. For instance, the optimization term in our main convergence bound decreases linearly with the product of the number of communication rounds and the size of each client's local dataset, and the statistical term scales quadratically with the step size instead of linearly (the case for client sampling with replacement), leading to a better convergence rate of $\mathcal{O}\left(\frac{1}{T^2}\right)$ compared to $\mathcal{O}\left(\frac{1}{T}\right)$, where $T$ is the total number of communication rounds. Furthermore, our results permit arbitrary client availability as long as each client is available for training once per meta epoch.
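To make the participation scheme concrete, the Python sketch below illustrates the idea from the abstract: clients are split into $R$ cohorts by a fixed permutation, and cohort $t \bmod R$ trains in round $t$, so every client participates exactly once per meta epoch. This is a minimal illustration only; the toy quadratic losses, cohort sizes, step size, and number of local steps are hypothetical choices for the sketch, not the paper's actual algorithm parameters or experimental setup.

# Minimal sketch of regularized client participation with FedAvg-style local
# steps. Assumptions (not from the paper): toy quadratic client losses, a fixed
# permutation defining the cohorts, and illustrative hyperparameters.
import numpy as np

rng = np.random.default_rng(0)

n_clients, R, dim = 12, 3, 5          # R communication rounds form one meta epoch
T, local_steps, lr = 30, 4, 0.1       # rounds, FedAvg local steps, step size

# Each client holds a toy loss f_i(x) = 0.5 * ||x - b_i||^2, standing in for
# the local dataset mentioned in the abstract.
targets = rng.normal(size=(n_clients, dim))

def local_update(x, b):
    """Run a few local gradient steps (FedAvg-style) on one client's loss."""
    x = x.copy()
    for _ in range(local_steps):
        x -= lr * (x - b)             # gradient of 0.5 * ||x - b||^2 is (x - b)
    return x

# A fixed permutation partitions clients into R cohorts; each client therefore
# appears in exactly one round of every meta epoch (no resampling variance).
perm = rng.permutation(n_clients)
cohorts = np.array_split(perm, R)

x = np.zeros(dim)                      # global model
for t in range(T):
    cohort = cohorts[t % R]            # regularized participation schedule
    # Server averages the participating clients' locally updated models.
    x = np.mean([local_update(x, targets[i]) for i in cohort], axis=0)

print("distance to mean of all targets:", np.linalg.norm(x - targets.mean(axis=0)))

The key design point the sketch highlights is the deterministic cohort schedule: unlike sampling a random subset with replacement each round, every client is visited once per meta epoch, which is the source of the variance reduction the abstract describes.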