Computer science
Node (physics)
Weighting
Convergence (economics)
Orchestration
Artificial intelligence
Theoretical computer science
Economic growth
Structural engineering
Medicine
Radiology
Engineering
Art
Visual arts
Economics
Musical theatre
Source
Journal: IEEE Transactions on Cognitive Communications and Networking
Publisher: Institute of Electrical and Electronics Engineers
Date: 2021-12-01
Volume/Issue: 7 (4): 1078-1088
Citations: 68
Identifiers
DOI:10.1109/tccn.2021.3084406
Abstract
Federated learning (FL) enables resource-constrained edge nodes to collaboratively learn a global model under the orchestration of a central server while keeping privacy-sensitive data locally. Non-independent-and-identically-distributed (non-IID) data samples across participating nodes slow model training and impose additional communication rounds for FL to converge. In this paper, we propose the Federated Adaptive Weighting (FedAdp) algorithm, which aims to accelerate model convergence in the presence of nodes with non-IID datasets. Through theoretical and empirical analysis, we observe an implicit connection between a node's contribution to the global model aggregation and the data distribution on the local node. We then propose to adaptively assign different weights for updating the global model based on node contribution in each training round. The contribution of a participating node is first measured by the angle between its local gradient vector and the global gradient vector, and the weight is then quantified by a designed non-linear mapping function. This simple yet effective strategy dynamically reinforces positive (and suppresses negative) node contributions, drastically reducing the number of communication rounds. Its superiority over the commonly adopted Federated Averaging (FedAvg) algorithm is verified both theoretically and experimentally. With extensive experiments performed in PyTorch and PySyft, we show that FL training with FedAdp can reduce the number of communication rounds by up to 54.1% on the MNIST dataset and up to 45.4% on the FashionMNIST dataset, compared to the FedAvg algorithm.
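The angle-based weighting described in the abstract can be sketched roughly as follows. This is a minimal illustration under assumptions, not the paper's reference implementation: the cosine-angle measurement of node contribution follows the abstract, but the decaying-exponential mapping exp(-alpha * angle), the alpha parameter, and the helper names (flatten_update, fedadp_weights, aggregate) are hypothetical stand-ins for the paper's designed non-linear mapping function and aggregation details.

import torch
import torch.nn.functional as F

def flatten_update(state_dict):
    # Flatten a model update (dict of tensors) into a single 1-D vector.
    return torch.cat([p.detach().reshape(-1).float() for p in state_dict.values()])

def fedadp_weights(local_grads, global_grad, alpha=5.0):
    # local_grads : list of 1-D tensors, one local gradient/update per node
    # global_grad : 1-D tensor, the aggregated global gradient of this round
    # alpha       : hypothetical steepness parameter of the non-linear mapping
    angles = []
    for g in local_grads:
        # Node contribution is measured by the angle between the local
        # gradient vector and the global gradient vector.
        cos = F.cosine_similarity(g, global_grad, dim=0).clamp(-1.0, 1.0)
        angles.append(torch.acos(cos))
    angles = torch.stack(angles)

    # Placeholder non-linear mapping: a smaller angle (better alignment with
    # the global gradient) yields a larger contribution score, so positive
    # contributions are reinforced and negative ones suppressed.
    scores = torch.exp(-alpha * angles)

    # Normalize the scores into aggregation weights that sum to one.
    return scores / scores.sum()

def aggregate(global_model, local_models, weights):
    # Weighted aggregation of local model parameters into the global model,
    # using the adaptive weights in place of FedAvg's data-size proportions.
    new_state = {}
    for name, param in global_model.state_dict().items():
        stacked = torch.stack([m.state_dict()[name].float() for m in local_models])
        w = weights.view(-1, *([1] * (stacked.dim() - 1)))
        new_state[name] = (w * stacked).sum(dim=0).to(param.dtype)
    global_model.load_state_dict(new_state)

The design intuition is that a local gradient pointing in a direction similar to the global gradient is likely computed on data representative of the overall distribution, so it deserves a larger aggregation weight; any monotonically decreasing mapping of the angle captures this, and the specific function used by FedAdp is defined in the paper itself.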