Computer science
Independent and identically distributed random variables
Convergence
Generalization
Set
Parametrization
Key (lock)
Heterogeneous network
Federated learning
Distributed computing
Artificial intelligence
Machine learning
Random variable
Mathematics
Physics
History
Mathematical analysis
Telecommunications
Wireless network
Radiative transfer
Economics
Archaeology
Quantum mechanics
Economic growth
Wireless
Computer security
Statistics
Authors
Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith
Source
Venue: arXiv (Cornell University)
Date: 2018-12
Citations: 1502
Identifier
DOI: 10.48550/arXiv.1812.06127
Abstract
Federated Learning is a distributed learning paradigm with two key challenges that differentiate it from traditional distributed optimization: (1) significant variability in terms of the systems characteristics on each device in the network (systems heterogeneity), and (2) non-identically distributed data across the network (statistical heterogeneity). In this work, we introduce a framework, FedProx, to tackle heterogeneity in federated networks. FedProx can be viewed as a generalization and re-parameterization of FedAvg, the current state-of-the-art method for federated learning. While this re-parameterization makes only minor modifications to the method itself, these modifications have important ramifications both in theory and in practice. Theoretically, we provide convergence guarantees for our framework when learning over data from non-identical distributions (statistical heterogeneity), and while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work (systems heterogeneity). Practically, we demonstrate that FedProx allows for more robust convergence than FedAvg across a suite of realistic federated datasets. In particular, in highly heterogeneous settings, FedProx demonstrates significantly more stable and accurate convergence behavior relative to FedAvg, improving absolute test accuracy by 22% on average.
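The "minor modification" the abstract refers to is FedProx's proximal term: each device approximately minimizes its local loss F_k(w) plus a penalty (mu/2) * ||w - w_global||^2 that keeps local updates close to the current global model, and devices may run different amounts of local work. The following is a minimal Python sketch under those assumptions, not the authors' reference implementation; names such as grad_fk, lr, and local_steps are illustrative placeholders.

import numpy as np

def fedprox_local_update(w_global, grad_fk, mu=0.1, lr=0.01, local_steps=10):
    # Approximately solve the device-k proximal subproblem
    #   min_w  F_k(w) + (mu / 2) * ||w - w_global||^2
    # with plain gradient steps. Setting mu = 0 recovers a FedAvg-style
    # local update; local_steps may differ per device, which is how a
    # variable amount of work (systems heterogeneity) is accommodated.
    w = w_global.copy()
    for _ in range(local_steps):
        g = grad_fk(w) + mu * (w - w_global)  # local gradient + proximal pull
        w -= lr * g
    return w

def server_round(w_global, client_grads, mu, steps_per_client):
    # One communication round: each client solves its proximal subproblem
    # (possibly with a different step budget), then the server averages.
    updates = [fedprox_local_update(w_global, g, mu=mu, local_steps=s)
               for g, s in zip(client_grads, steps_per_client)]
    return np.mean(updates, axis=0)

# Toy usage: client k holds the quadratic loss ||w - c_k||^2 (gradient
# 2 * (w - c_k)), with differing centers standing in for non-IID data
# and differing step budgets standing in for stragglers.
centers = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([-1.0, 1.0])]
grads = [lambda w, c=c: 2.0 * (w - c) for c in centers]
w = np.zeros(2)
for _ in range(100):
    w = server_round(w, grads, mu=0.5, steps_per_client=[10, 5, 2])
print(w)  # settles near the average of the client centers

The proximal pull is what stabilizes training when local losses disagree: without it (mu = 0), clients with large step budgets drift far toward their own optima before averaging, which is the instability the abstract attributes to FedAvg in highly heterogeneous settings.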