Computer science
Federated learning
Benchmark (surveying)
Latency (audio)
Convergence (economics)
Generalization
Rate of convergence
Artificial intelligence
Class (philosophy)
Machine learning
Distributed computing
Computer network
Channel (broadcasting)
Mathematics
Mathematical analysis
Economics
Telecommunications
Geography
Economic growth
Geodesy
Authors
Yue Tan, Guodong Long, Lu Liu, Tianyi Zhou, Qinghua Lu, Jing Jiang, Chengqi Zhang
Source
Journal: Proceedings of the ... AAAI Conference on Artificial Intelligence [Association for the Advancement of Artificial Intelligence (AAAI)]
Date: 2022-06-28
Volume/Issue: 36 (8): 8432-8440
Citations: 169
Identifiers
DOI: 10.1609/aaai.v36i8.20819
Abstract
Heterogeneity across clients in federated learning (FL) usually hinders optimization convergence and generalization performance when clients' knowledge is aggregated in the gradient space. For example, clients may differ in data distribution, network latency, input/output space, and/or model architecture, which can easily lead to misalignment of their local gradients. To improve tolerance to heterogeneity, we propose a novel federated prototype learning (FedProto) framework in which the clients and server communicate abstract class prototypes instead of gradients. FedProto aggregates the local prototypes collected from different clients and then sends the global prototypes back to all clients to regularize the training of local models. Training on each client aims to minimize the classification error on the local data while keeping the resulting local prototypes sufficiently close to the corresponding global ones. Moreover, we provide a theoretical analysis of the convergence rate of FedProto under non-convex objectives. In experiments, we propose a benchmark setting tailored for heterogeneous FL, on which FedProto outperforms several recent FL approaches across multiple datasets.
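The abstract fully specifies the communication pattern (clients send class prototypes, the server averages them, and local training is regularized toward the global prototypes), so a short sketch may help make it concrete. The PyTorch-style Python below is a minimal illustration under stated assumptions, not the authors' released implementation: the `model.embed`/`model.head` split, the `lam` regularization weight, and the unweighted server average are all assumptions (the paper's exact aggregation weighting and distance term may differ).

```python
import torch
import torch.nn.functional as F

def local_prototypes(model, loader, device="cpu"):
    # Client step: the prototype of class j is the mean embedding of
    # the client's local samples with label j.
    model.eval()
    sums, counts = {}, {}
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            z = model.embed(x)  # assumed embedding (pre-classifier) forward pass
            for j in y.unique().tolist():
                mask = y == j
                sums[j] = sums.get(j, 0) + z[mask].sum(dim=0)
                counts[j] = counts.get(j, 0) + int(mask.sum())
    return {j: sums[j] / counts[j] for j in sums}

def aggregate_prototypes(client_protos):
    # Server step: average each class prototype over the clients that
    # actually hold samples of that class (unweighted mean for brevity).
    buckets = {}
    for protos in client_protos:
        for j, c in protos.items():
            buckets.setdefault(j, []).append(c)
    return {j: torch.stack(cs).mean(dim=0) for j, cs in buckets.items()}

def local_loss(model, x, y, global_protos, lam=1.0):
    # Client objective: cross-entropy on local data plus an L2 term pulling
    # each sample's embedding toward the global prototype of its label.
    z = model.embed(x)
    loss = F.cross_entropy(model.head(z), y)  # model.head is hypothetical
    if global_protos:
        # assumes every label in the batch already has a global prototype
        target = torch.stack([global_protos[int(j)] for j in y])
        loss = loss + lam * F.mse_loss(z, target)
    return loss
```

A communication round under this sketch would be: each client computes `local_prototypes`, the server calls `aggregate_prototypes`, and clients then train against `local_loss` using the returned global prototypes; only prototype dictionaries, never gradients or model weights, cross the network.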