Keywords
Computer science
Cloud computing
Convergence (economics)
Overhead (engineering)
Enhanced Data Rates for GSM Evolution (EDGE)
Distributed computing
Edge device
Node (physics)
Edge computing
Performance improvement
Artificial intelligence
Operating system
Operations management
Structural engineering
Engineering
Economics
Economic growth
Authors
Long Luo,Chi Zhang,Hongfang Yu,Gang Sun,Shouxi Luo,Schahram Dustdar
Source
Journal: IEEE Transactions on Services Computing
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2024-01-01
Pages: 1-14
Identifier
DOI: 10.1109/tsc.2024.3399649
Abstract
Client-edge-cloud Federated Learning (CEC-FL) is emerging as an increasingly popular FL paradigm, alleviating the performance limitations of conventional cloud-centric Federated Learning (FL) by incorporating edge computing. However, improving training efficiency while retaining model convergence is not easy in CEC-FL. Although controlling aggregation frequency exhibits great promise in improving efficiency by reducing communication overhead, existing works still struggle to simultaneously achieve satisfactory training efficiency and model convergence performance in heterogeneous and dynamic environments. This paper proposes FedAda, a communication-efficient CEC-FL training method that aims to enhance training performance while ensuring model convergence through adaptive aggregation frequency adjustment. To this end, we theoretically analyze the model convergence under aggregation frequency control. Based on this analysis of the relationship between model convergence and aggregation frequencies, we propose an approximation algorithm to calculate aggregation frequencies, considering convergence and aligning with heterogeneous and dynamic node capabilities, ultimately achieving superior convergence accuracy and speed. Simulation results validate the effectiveness and efficiency of FedAda, demonstrating up to 4% improvement in test accuracy, 6.8× shorter training time and 3.3× less communication overhead compared to prior solutions.
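The abstract describes adaptive aggregation-frequency control in a client-edge-cloud hierarchy. The sketch below is only a minimal illustration of that general idea: clients run local steps, edge servers average client models, the cloud averages edge models, and the client-level aggregation interval is adjusted over time. It is not the FedAda algorithm or its convergence-derived frequencies; the synthetic task, the loss-based adaptation rule, and all names here are assumptions made for illustration.

```python
# Minimal, hypothetical sketch of client-edge-cloud FL with an adaptive
# client-level aggregation interval. NOT the FedAda method from the paper;
# the adaptation rule and all constants below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression task; each client holds its own data shard.
DIM, N_EDGES, CLIENTS_PER_EDGE = 5, 2, 3
w_true = rng.normal(size=DIM)

def make_client():
    X = rng.normal(size=(40, DIM))
    y = X @ w_true + 0.1 * rng.normal(size=40)
    return X, y

clients = [[make_client() for _ in range(CLIENTS_PER_EDGE)]
           for _ in range(N_EDGES)]

def local_sgd(w, X, y, steps, lr=0.05):
    """Run `steps` gradient-descent steps on one client's local data."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def global_loss(w):
    return np.mean([np.mean((X @ w - y) ** 2)
                    for edge in clients for X, y in edge])

w_cloud = np.zeros(DIM)
tau_client = 5   # local steps between client->edge aggregations (adapted below)
TAU_EDGE = 2     # edge rounds between edge->cloud aggregations
prev_loss = global_loss(w_cloud)

for cloud_round in range(10):
    edge_models = []
    for edge in clients:
        w_edge = w_cloud.copy()
        for _ in range(TAU_EDGE):
            # Clients train from the current edge model, then the edge averages.
            locals_ = [local_sgd(w_edge.copy(), X, y, tau_client) for X, y in edge]
            w_edge = np.mean(locals_, axis=0)
        edge_models.append(w_edge)
    w_cloud = np.mean(edge_models, axis=0)  # cloud-level aggregation

    # Hypothetical adaptation rule: if the loss stalls, aggregate more often
    # (smaller interval) to curb client drift; if it improves quickly,
    # aggregate less often to save communication.
    loss = global_loss(w_cloud)
    if prev_loss - loss < 1e-3:
        tau_client = max(1, tau_client - 1)
    else:
        tau_client = min(10, tau_client + 1)
    prev_loss = loss
    print(f"round {cloud_round}: loss={loss:.4f}, tau_client={tau_client}")
```

Running the sketch prints the global loss and the current aggregation interval after each cloud round; the paper's actual method instead sets the frequencies via an approximation algorithm guided by its convergence analysis and by node capabilities.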