Speedup
Computer science
MNIST database
Sublinear function
Process (computing)
Benchmarking
Set (abstract data type)
Distributed computing
Machine learning
Deep learning
Parallel computing
Mathematical analysis
Mathematics
Marketing
Business
Programming language
Operating system
Authors
Yikai Yan, Chaoyue Niu, Yucheng Ding, Zhenzhe Zheng, Shaojie Tang, Qinya Li, Fan Wu, Chengfei Lyu, Yanghe Feng, Guihai Chen
Source
Journal: INFORMS Journal on Computing
Date: 2024-01-01
Volume/Issue: 36(1): 185-202
Citations: 2
Identifier
DOI: 10.1287/ijoc.2022.0057
Abstract
Federated learning is a new distributed machine learning framework, where a set of heterogeneous clients collaboratively train a model without sharing training data. In this work, we consider a practical and ubiquitous issue when deploying federated learning in mobile environments: intermittent client availability, where the set of eligible clients may change during the training process. Such intermittent client availability would seriously deteriorate the performance of the classical Federated Averaging algorithm (FedAvg for short). Thus, we propose a simple distributed non-convex optimization algorithm, called Federated Latest Averaging (FedLaAvg for short), which leverages the latest gradients of all clients, even when the clients are not available, to jointly update the global model in each iteration. Our theoretical analysis shows that FedLaAvg attains the convergence rate of $O(E^{1/2}/(N^{1/4} T^{1/2}))$, achieving a sublinear speedup with respect to the total number of clients. We implement FedLaAvg along with several baselines and evaluate them on the benchmark MNIST and Sentiment140 datasets. The evaluation results demonstrate that FedLaAvg achieves more stable training than FedAvg in both convex and non-convex settings and indeed reaches a sublinear speedup.
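The update rule described in the abstract (averaging the most recently received gradient of every client, including clients that are currently unavailable) can be illustrated with a short sketch. The code below is a minimal illustration based only on the abstract's description, not the authors' implementation; the fedlaavg_sketch function, the ToyClient class, and its available/gradient interface are hypothetical names introduced here for the example.

import numpy as np

def fedlaavg_sketch(clients, model_dim, rounds=200, lr=0.1):
    # Latest-gradient averaging under intermittent client availability,
    # sketched from the abstract's description (not the authors' released code).
    w = np.zeros(model_dim)                          # global model
    latest = [np.zeros(model_dim) for _ in clients]  # cached latest gradient per client
    for t in range(rounds):
        for i, c in enumerate(clients):
            if c.available(t):                # only currently eligible clients participate
                latest[i] = c.gradient(w)     # refresh this client's cached gradient
        # Update with the average of the latest gradients of *all* clients,
        # including stale gradients cached for currently unavailable clients.
        w = w - lr * np.mean(latest, axis=0)
    return w

class ToyClient:
    # Hypothetical client whose local loss is 0.5 * ||w - target||^2.
    def __init__(self, target, p_available, rng):
        self.target, self.p, self.rng = target, p_available, rng
    def available(self, t):
        return self.rng.random() < self.p     # intermittently available
    def gradient(self, w):
        return w - self.target                # exact gradient of the toy quadratic loss

rng = np.random.default_rng(0)
clients = [ToyClient(rng.normal(size=5), p_available=0.3, rng=rng) for _ in range(10)]
print(fedlaavg_sketch(clients, model_dim=5))  # drifts toward the mean of the targets

In this toy setting each local loss is quadratic, so the global model should approach the mean of the clients' targets even though only a fraction of the clients is reachable in any given round, which is the qualitative behavior the abstract attributes to FedLaAvg under intermittent availability.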