Keywords
Computer science, Inference, Latency (audio), Edge device, Artificial intelligence, Wireless network, Artificial neural network, Wireless, Distributed computing, Reduction (mathematics), Machine learning, Cloud computing, Geometry, Mathematics, Telecommunications, Operating system
Authors
Ce Xu, Jinxuan Li, Yuan Liu, Yushi Ling, Miaowen Wen
Identifier
DOI: 10.1109/twc.2023.3327372
Abstract
The development of artificial intelligence (AI) has created opportunities for deep neural network (DNN)-based applications. However, the large number of parameters and the computational complexity of DNNs make them difficult to deploy on resource-constrained edge devices. An effective way to address this challenge is model partitioning/splitting, in which a DNN is divided into two parts deployed on the device and the server, respectively, for co-training or co-inference. In this paper, we consider a split federated learning (SFL) framework that combines the parallel model-training mechanism of federated learning (FL) with the model-splitting structure of split learning (SL). We consider a practical scenario of heterogeneous devices, each with its own DNN split point. We formulate a joint problem of split point selection and bandwidth allocation to minimize the system latency. Using alternating optimization, we decompose the problem into two sub-problems and solve each optimally. Experimental results demonstrate the superiority of our approach in latency reduction and accuracy improvement.
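The alternating-optimization structure the abstract describes lends itself to a short illustration. Below is a minimal Python sketch of that loop: with bandwidth shares fixed, each device picks its latency-minimizing split point; with split points fixed, the shared uplink bandwidth is re-allocated. The latency profiles, the proportional bandwidth rule (a simple heuristic that equalizes upload time, not the paper's derived optimal allocation), and all names are illustrative assumptions, not taken from the paper.

```python
# Hypothetical per-device profiles: cumulative on-device compute time,
# remaining server compute time, and intermediate-activation size (bits)
# for each candidate split point. All numbers are illustrative, not
# taken from the paper.
DEVICES = [
    {"device_time": [0.00, 0.04, 0.09, 0.15],
     "server_time": [0.06, 0.05, 0.03, 0.01],
     "upload_bits": [4e6, 2e6, 1e6, 5e5]},
    {"device_time": [0.00, 0.08, 0.18, 0.30],   # a slower device
     "server_time": [0.06, 0.05, 0.03, 0.01],
     "upload_bits": [4e6, 2e6, 1e6, 5e5]},
]
TOTAL_BANDWIDTH = 1e7  # total uplink bandwidth (bit/s) shared by all devices


def latency(dev, split, bandwidth):
    """Per-device round latency: device compute + upload + server compute."""
    return (dev["device_time"][split]
            + dev["upload_bits"][split] / bandwidth
            + dev["server_time"][split])


def best_splits(bandwidths):
    """Sub-problem 1: with bandwidth fixed, pick each device's best split."""
    return [min(range(len(d["device_time"])),
                key=lambda s: latency(d, s, b))
            for d, b in zip(DEVICES, bandwidths)]


def best_bandwidths(splits):
    """Sub-problem 2: with splits fixed, share bandwidth in proportion to
    upload size, which equalizes upload time across devices (a heuristic,
    not the paper's optimal rule)."""
    sizes = [d["upload_bits"][s] for d, s in zip(DEVICES, splits)]
    total = sum(sizes)
    return [TOTAL_BANDWIDTH * sz / total for sz in sizes]


# Alternating optimization: iterate the two sub-problems until the split
# points stop changing. The system latency is set by the slowest device,
# since the devices train in parallel and synchronize each round.
bw = [TOTAL_BANDWIDTH / len(DEVICES)] * len(DEVICES)  # equal initial shares
prev = None
for it in range(20):
    splits = best_splits(bw)
    bw = best_bandwidths(splits)
    system_latency = max(latency(d, s, b)
                         for d, s, b in zip(DEVICES, splits, bw))
    print(f"iter {it}: splits={splits}, system latency={system_latency:.3f}s")
    if splits == prev:
        break  # fixed point reached
    prev = splits
```

In this toy setting the heterogeneity shows up directly: the slower device tends to choose an earlier split point (offloading more layers to the server), and the bandwidth rule then compensates for its larger upload.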