Computer science
Enhanced Data Rates for GSM Evolution (EDGE)
Parallelism (grammar)
Edge computing
Data parallelism
Task parallelism
Computer architecture
Artificial intelligence
Parallel computing
Authors
Yunming Liao,Yang Xu,Hongli Xu,Zhiwei Yao,Lun Wang,Chunming Qiao
Source
Journal: IEEE/ACM Transactions on Networking (Institute of Electrical and Electronics Engineers)
Date: 2023-08-24
Volume/Issue: 32(1): 904-918
Citations: 13
Identifier
DOI: 10.1109/tnet.2023.3299851
Abstract
Recently, edge AI has been launched to mine and discover valuable knowledge at the network edge. Federated Learning, an emerging technique for edge AI, has been widely deployed to collaboratively train models on many end devices in a data-parallel fashion. To alleviate the computation/communication burden on resource-constrained workers (e.g., end devices) and to protect user privacy, Split Federated Learning (SFL), which integrates both data parallelism and model parallelism in Edge Computing (EC), is becoming a practical and popular approach for model training over distributed data. However, beyond resource limitations, SFL faces two other critical challenges in EC: system heterogeneity and context dynamics. To overcome these challenges, we present an efficient SFL method, named AdaSFL, which controls both the local updating frequency and the batch size to accelerate model training. We theoretically analyze the model convergence rate and obtain a convergence upper bound with respect to the local updating frequency given a fixed batch size. Building on this analysis, we develop a control algorithm that determines an adaptive local updating frequency and diverse batch sizes for heterogeneous workers to enhance training efficiency. The experimental results show that, compared to the baselines, AdaSFL reduces completion time by about 43% and network traffic consumption by about 31% while achieving similar test accuracy.
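The split-training loop at the core of SFL can be made concrete with a short sketch. The following is a minimal, illustrative PyTorch example of one worker's local round, not the authors' AdaSFL implementation: the model is cut into a client part and a server part, the worker sends the cut-layer activations ("smashed data") to the edge server, and the server returns the gradient of the loss with respect to those activations. The two quantities the paper adapts per worker, the local updating frequency K and the batch size, appear as explicit knobs; all names here are hypothetical.

```python
import torch
import torch.nn as nn

# Client-side and server-side sub-models, split at a "cut layer"
# (model parallelism across the worker/server boundary).
client_part = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
server_part = nn.Sequential(nn.Linear(64, 10))

opt_c = torch.optim.SGD(client_part.parameters(), lr=0.01)
opt_s = torch.optim.SGD(server_part.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def local_round(batches, K):
    """One SFL local round on a single worker: K local updates.
    K (local updating frequency) and the batch size of `batches`
    are the two knobs a control algorithm like AdaSFL would adapt."""
    for step, (x, y) in enumerate(batches):
        if step >= K:
            break
        opt_c.zero_grad()
        opt_s.zero_grad()
        # Worker: forward through the client part only; the cut-layer
        # activations ("smashed data") are what travel to the server.
        smashed = client_part(x)
        # Server: forward/backward on the received activations.
        server_in = smashed.detach().requires_grad_(True)
        loss = loss_fn(server_part(server_in), y)
        loss.backward()
        opt_s.step()
        # Server sends d(loss)/d(activations) back; the worker resumes
        # backpropagation through its own layers from that gradient.
        smashed.backward(server_in.grad)
        opt_c.step()

# Toy usage: 5 random batches of size 16, K = 3 local updates.
data = [(torch.randn(16, 32), torch.randint(0, 10, (16,))) for _ in range(5)]
local_round(data, K=3)
```

Detaching the activations before the server-side forward pass is what decouples the two optimizers: the server backpropagates only through its own layers, and the worker continues backpropagation from the returned activation gradient, so neither side ever holds the full model or the raw labels of the other.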