Keywords
Computer science
Enhanced Data Rates for GSM Evolution (EDGE)
Speedup
Federated learning
Efficient energy use
Edge computing
Mobile device
Training (meteorology)
Edge device
Energy consumption
Server
Distributed computing
Real-time computing
Artificial intelligence
Operating system
Cloud computing
Electrical engineering
Engineering
Ecology
Biology
Physics
Meteorology
Authors
Yangguang Cui, Kun Cao, Junlong Zhou, Tongquan Wei
Identifier
DOI:10.23919/date54114.2022.9774662
Abstract
Federated Learning (FL), an emerging distributed machine learning (ML) paradigm, empowers a large number of embedded devices (e.g., phones and cameras) and a server to jointly train a global ML model without centralizing user private data on the server. However, when deploying FL in a mobile-edge computing (MEC) system, the restricted communication resources of the MEC system and the heterogeneity and constrained energy of user devices severely degrade FL training efficiency. To address these issues, in this article, we design a distinctive FL framework, called HELCFL, to achieve high-efficiency and low-cost FL training. Specifically, by analyzing the theoretical foundation of FL, our HELCFL first develops a utility-driven, greedy-decay user selection strategy to enhance FL performance and reduce training delay. Subsequently, by analyzing and utilizing the slack time in FL training, our HELCFL introduces a device operating frequency determination approach to reduce training energy costs. Experiments verify that, compared to state-of-the-art baselines, our HELCFL improves the highest accuracy by up to 43.45%, achieves a training speedup of up to 275.03%, and saves up to 58.25% of training energy costs.
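The two mechanisms named in the abstract — utility-driven user selection with a greedy decay, and choosing the lowest device frequency that still fits within the round's slack time — can be sketched roughly as follows. This is a minimal illustration, not the paper's actual HELCFL algorithm: the function names, the exponential decay scheme, and the discrete frequency model are all assumptions made for the example.

```python
def select_users(utilities, k, decay=0.9, counts=None):
    """Greedily pick the k highest-utility users, decaying the utility of
    users chosen in earlier rounds so selection does not collapse onto a
    few fast clients. `utilities` maps user id -> estimated training
    utility; `counts` tracks how often each user was already selected."""
    counts = dict(counts or {})
    # Decay each user's utility once per previous selection (illustrative).
    adjusted = {u: v * (decay ** counts.get(u, 0)) for u, v in utilities.items()}
    chosen = sorted(adjusted, key=adjusted.get, reverse=True)[:k]
    for u in chosen:
        counts[u] = counts.get(u, 0) + 1
    return chosen, counts

def min_frequency(cycles, slack_seconds, freqs):
    """Return the lowest available CPU frequency (Hz) that still finishes
    `cycles` of local work within the slack; dynamic power grows roughly
    with f^2..f^3, so running slower saves energy."""
    needed = cycles / slack_seconds
    feasible = [f for f in sorted(freqs) if f >= needed]
    # If even the highest frequency misses the slack, run at maximum.
    return feasible[0] if feasible else max(freqs)
```

For example, a user selected in round one has its utility discounted in round two, letting a slower but less-used device win a slot; a device with 10^9 cycles of work and 2 s of slack only needs 0.5 GHz, so it can drop from 1 GHz to a 0.6 GHz step.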