Asynchronous communication
Computer science
Latency (audio)
Computing
Artificial intelligence
Internet
Machine learning
Distributed computing
Computer network
World Wide Web
Algorithm
Telecommunications
Authors
Shuai Chen, Xiumin Wang, Pan Zhou, Weiwei Wu, Weiwei Lin, Zhenyu Wang
Source
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence
[Institute of Electrical and Electronics Engineers]
Date: 2022-02-15
Volume/Issue: 6 (5): 1113-1124
Citations: 10
Identifier
DOI:10.1109/tetci.2022.3146871
Abstract
Federated learning (FL) has recently received significant attention in the Internet of Things, due to its capability of enabling multiple clients to collaboratively train machine learning models using neural networks without sharing their privacy-sensitive data. However, because clients are heterogeneous in their computation and communication capabilities, they may not return their trained models to the server at the same time, which can result in high waiting latency at the server, especially in synchronous FL. Although asynchronous FL can reduce this waiting latency, aggregating the global model in a completely asynchronous way may leave some local models out of date, resulting in low training accuracy. To address these issues, this paper proposes a novel Heterogeneous Semi-Asynchronous FL mechanism, named HSA_FL. Firstly, we use a Multi-Armed Bandit (MAB) approach to identify the heterogeneous communication and computation capabilities of clients, based on which we assign different training intensities to them; in general, clients with lower capabilities are assigned fewer local updates. In addition, instead of waiting for all clients to return their trained models or aggregating immediately after receiving a single local model, this paper proposes two aggregation rules, named adaptive update and fixed adaptive, respectively. Finally, simulation results show that the proposed scheme effectively reduces training time and improves training accuracy compared with several benchmark algorithms.
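The abstract describes using a Multi-Armed Bandit to estimate each client's capability and then assigning fewer local updates to slower clients. The sketch below is not the authors' algorithm; it is a minimal illustration of the general idea, assuming a standard UCB1 bandit where each "arm" is a client, the reward is a noisy observation of that client's speed, and the learned capability estimate is mapped linearly to a local-update count. The capability values, noise model, and update range are all hypothetical.

```python
import math
import random

random.seed(0)

# Hypothetical per-client "true" capability (higher = faster); unknown to the server.
true_capability = [0.9, 0.5, 0.2]
n_clients = len(true_capability)

counts = [0] * n_clients   # how many times each client has been probed
means = [0.0] * n_clients  # running mean of observed capability (reward)

def ucb_index(i, t):
    """UCB1 score for client i at round t: exploitation term plus exploration bonus."""
    if counts[i] == 0:
        return float("inf")  # force at least one observation of every client
    return means[i] + math.sqrt(2.0 * math.log(t) / counts[i])

for t in range(1, 201):
    # Probe the client with the highest UCB index this round.
    i = max(range(n_clients), key=lambda c: ucb_index(c, t))
    # Reward: noisy measurement of the client's speed (illustrative model only).
    reward = min(1.0, max(0.0, random.gauss(true_capability[i], 0.05)))
    counts[i] += 1
    means[i] += (reward - means[i]) / counts[i]

# Map capability estimates in [0, 1] to a local-update budget in [1, 10]:
# slower clients get fewer local updates, reducing server-side waiting.
local_updates = [max(1, round(m * 10)) for m in means]
```

After the probing loop, `means` approximates each client's capability, and the fastest client ends up with the largest local-update budget. A real semi-asynchronous FL system would of course fold these observations into actual training rounds rather than running a separate probing phase.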