Computer science
Cloud computing
Utilization
Edge computing
Distributed computing
Speedup
Edge device
Efficient energy use
Mobile device
Training
Artificial intelligence
Operating system
Computer security
Meteorology
Engineering
Physics
Electrical engineering
Authors
Yangguang Cui, Kun Cao, Junlong Zhou, Tongquan Wei
Identifier
DOI:10.1109/tcad.2022.3205551
Abstract
Federated learning (FL), an emerging distributed machine learning (ML) technique, allows massive embedded devices and a server to train a global ML model together without collecting user data on the server. Most existing approaches adopt a traditional centralized FL paradigm with a single server, in one of two forms: the cloud-centric FL paradigm and the edge-centric FL paradigm. The cloud-centric FL paradigm can manage a large-scale FL system across massive user devices, albeit at high communication cost, whereas the edge-centric FL paradigm can coordinate only a small-scale FL system but benefits from the low communication delay of wireless edge networks. To fully exploit the advantages of both, in this article we develop a distinctive hierarchical FL framework for the promising mobile-edge cloud computing (MECC) system, called HELCHFL, to achieve high-efficiency and low-cost hierarchical FL training. In particular, we formulate the corresponding theoretical foundation for HELCHFL to ensure hierarchical training performance. Furthermore, to address the inherent communication and user-heterogeneity issues of FL training, HELCHFL develops a utility-driven and heterogeneity-aware heuristic user selection strategy to enhance training performance and reduce training delay. Subsequently, by analyzing and utilizing the slack time in FL training, HELCHFL introduces a device operating-frequency determination approach to reduce training energy cost. Experiments demonstrate that HELCHFL can improve the best accuracy by up to 52.93%, achieve a training speedup of up to 483.74%, and obtain up to 45.59% training energy savings compared to state-of-the-art baselines.
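To make the hierarchical (device → edge → cloud) training flow concrete, the sketch below shows a toy two-tier federated-averaging round with a simple client-selection heuristic. This is an illustrative simplification, not the paper's HELCHFL algorithm: the utility function (samples per unit delay), the flat-list model representation, and the selection budget are all assumptions made for demonstration.

```python
# Toy two-tier federated averaging: each edge server aggregates models from
# a selected subset of its devices, then the cloud aggregates the edge models.
# NOT the paper's HELCHFL method; a minimal sketch under stated assumptions.

def weighted_average(models, weights):
    """FedAvg-style aggregation: average parameter vectors weighted by
    the number of local training samples behind each model."""
    total = sum(weights)
    dim = len(models[0])
    return [sum(w * m[i] for m, w in zip(models, weights)) / total
            for i in range(dim)]

def select_users(devices, budget):
    """Heuristic selection: rank devices by a hypothetical utility of
    (local samples / estimated round delay) -- a crude stand-in for a
    utility-driven, heterogeneity-aware selection criterion."""
    ranked = sorted(devices, key=lambda d: d["samples"] / d["delay"],
                    reverse=True)
    return ranked[:budget]

def hierarchical_round(edges, budget_per_edge):
    """One global round: every edge aggregates its selected devices,
    then the cloud aggregates the resulting edge models."""
    edge_models, edge_weights = [], []
    for devices in edges:
        chosen = select_users(devices, budget_per_edge)
        models = [d["model"] for d in chosen]
        weights = [d["samples"] for d in chosen]
        edge_models.append(weighted_average(models, weights))
        edge_weights.append(sum(weights))  # edge weight = samples it covers
    return weighted_average(edge_models, edge_weights)
```

Because each edge only forwards one aggregated model per round, the cloud's communication cost scales with the number of edge servers rather than the number of devices, which is the structural advantage hierarchical FL exploits.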