Computer science
Server
Enhanced Data Rates for GSM Evolution (EDGE)
Bottleneck
Distributed computing
Edge computing
Edge device
Overhead (engineering)
Cloud computing
Federated learning
Computer network
Artificial intelligence
Embedded system
Operating system
Authors
Suo Chen,Zhenguo Ma,Zhiyuan Wang
Identifier
DOI:10.1145/3603781.3604232
Abstract
Federated Learning (FL) has gained significant popularity as a means of handling large-scale data in Edge Computing (EC) applications. Due to the frequent communication between edge devices and the server, the parameter-server-based framework for FL may suffer from a communication bottleneck, which degrades training efficiency. As an alternative, Hierarchical Federated Learning (HFL), which leverages edge servers as intermediaries to perform model aggregation among devices in proximity, has emerged. However, existing HFL solutions fail to train effectively under the constrained and heterogeneous communication resources of edge devices. In this paper, we design a communication-efficient HFL framework, named CE-HFL, to accelerate the convergence of HFL. Concretely, we propose to adjust the global and edge aggregation frequencies in HFL according to the heterogeneous communication resources among edge devices. By performing multiple local updates before communication, the communication overhead on edge servers and the cloud server can be significantly reduced. Experimental results on a real-world dataset demonstrate the effectiveness of the proposed method.
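The hierarchical scheme the abstract describes, where devices perform several local updates, edge servers aggregate nearby device models, and the cloud aggregates edge models less frequently, can be sketched as follows. This is a minimal illustrative simulation, not the authors' CE-HFL implementation; the least-squares objective, the fixed frequencies `tau1` (local steps per edge round) and `tau2` (edge rounds per global round), and all function names are assumptions for the sake of the example.

```python
import numpy as np

def local_update(w, data, lr=0.1, tau1=5):
    """Run tau1 local gradient steps on a device's least-squares objective."""
    X, y = data
    for _ in range(tau1):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def global_round(w_cloud, edge_groups, tau1=5, tau2=2):
    """One global round: each edge server runs tau2 edge aggregation rounds,
    each consisting of tau1 local updates per attached device, before the
    cloud averages the edge models."""
    edge_models = []
    for devices in edge_groups:              # devices under one edge server
        w_edge = w_cloud.copy()
        for _ in range(tau2):                # edge aggregation rounds
            updated = [local_update(w_edge.copy(), d, tau1=tau1)
                       for d in devices]
            w_edge = np.mean(updated, axis=0)   # edge-level aggregation
        edge_models.append(w_edge)
    return np.mean(edge_models, axis=0)      # cloud-level aggregation

# Synthetic data: 2 edge servers, 3 devices each, shared ground truth.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])

def make_device():
    X = rng.normal(size=(32, 2))
    y = X @ w_true + 0.01 * rng.normal(size=32)
    return X, y

edge_groups = [[make_device() for _ in range(3)] for _ in range(2)]
w = np.zeros(2)
for _ in range(20):
    w = global_round(w, edge_groups)
print(np.round(w, 2))  # approaches w_true
```

Raising `tau1` and `tau2` reduces how often models are exchanged with the edge and cloud servers, which is the communication-saving lever the abstract refers to; the paper's contribution is tuning these frequencies to heterogeneous device resources rather than fixing them as done here.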