Keywords
Computer science; edge computing; edge devices; distributed computing; cloud computing; machine learning; artificial intelligence; computer networks; operating systems; servers; asynchronous communication
Authors
Kaibin Wang, Qiang He, Feifei Chen, Hai Jin, Yun Yang
Identifier
DOI: 10.1145/3543507.3583264
Abstract
Federated learning (FL) has been widely acknowledged as a promising solution to training machine learning (ML) models with privacy preservation. To reduce the traffic overheads incurred by FL systems, edge servers have been placed between clients and the parameter server to aggregate clients' local models. Recent studies on this edge-assisted hierarchical FL scheme have focused on ensuring or accelerating model convergence by coping with various factors, e.g., uncertain network conditions, unreliable clients, and heterogeneous compute resources. This paper presents three new discoveries about the edge-assisted hierarchical FL scheme: 1) it wastes significant time during its two-phase training rounds; 2) it does not recognize or utilize model diversity when producing a global model; and 3) it is vulnerable to model poisoning attacks. To overcome these drawbacks, we propose FedEdge, a novel edge-assisted hierarchical FL scheme that accelerates model training with asynchronous local federated training and adaptive model aggregation. Extensive experiments are conducted on two widely used public datasets. The results demonstrate that, compared with state-of-the-art FL schemes, FedEdge accelerates model convergence by 1.14×–3.20× and improves model accuracy by 2.14%–6.63%.
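The two-phase rounds the abstract refers to can be illustrated with a minimal sketch of edge-assisted hierarchical federated averaging: clients take local steps, each edge server aggregates its clients' models, and the parameter server aggregates the edge models. This is a generic illustration of the baseline scheme, not FedEdge's actual algorithm; all function and variable names here are hypothetical, and models are plain lists of floats for simplicity.

```python
def fedavg(models, sizes):
    """Weighted average of models, weighted by local dataset sizes."""
    total = sum(sizes)
    dim = len(models[0])
    return [sum(m[i] * s for m, s in zip(models, sizes)) / total
            for i in range(dim)]

def local_step(weights, grad, lr=0.1):
    """One gradient step on a client's local data (gradient supplied)."""
    return [w - lr * g for w, g in zip(weights, grad)]

def hierarchical_round(global_model, edge_groups):
    """One two-phase round: clients -> edge servers -> parameter server.

    edge_groups: one list per edge server, holding (gradient, n_samples)
    tuples for each client attached to that edge server.
    """
    edge_models, edge_sizes = [], []
    for clients in edge_groups:
        # Phase 1: each client trains locally; the edge server aggregates.
        local_models = [local_step(global_model, g) for g, _ in clients]
        sizes = [n for _, n in clients]
        edge_models.append(fedavg(local_models, sizes))
        edge_sizes.append(sum(sizes))
    # Phase 2: the parameter server aggregates the edge models.
    return fedavg(edge_models, edge_sizes)

# Example: two edge servers with two clients each, on a 2-parameter model.
g = [1.0, -1.0]  # every client reports the same gradient, for illustration
edges = [[(g, 100), (g, 300)], [(g, 200), (g, 200)]]
new_model = hierarchical_round([0.0, 0.0], edges)  # approx [-0.1, 0.1]
```

In this synchronous baseline, the parameter server must wait for the slowest edge server, and each edge server for its slowest client, which is the per-round idle time the paper's first observation targets.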