Topics
Computer science
Deep learning
Distributed computing
Bottleneck
Mobile edge computing
Edge computing
Cloud computing
Edge device
Scheduling (production processes)
Mobile device
Artificial intelligence
Enhanced Data rates for GSM Evolution (EDGE)
Parallel computing
Embedded system
Operating system
Mathematical optimization
Mathematics
Authors
Xin Long,Jigang Wu,Yirong Wu,Long Chen
Source
Journal: Parallel and Distributed Computing: Applications and Technologies
Date: 2019-12-01
Cited by: 2
Identifier
DOI:10.1109/pdcat46702.2019.00022
Abstract
Mobile edge computing enables compute-intensive applications, e.g. deep learning applications, to run on end devices with limited computation resources. However, deep learning applications create a performance bottleneck in mobile edge computing, because their large number of layers and millions of weights incur a large amount of data movement. In this paper, a computing model for parallel deep learning applications in mobile edge computing is proposed, which accounts for the occupancy allocation of processors, the cost of context switches, and the multiple processors in the edge server and the remote cloud. The problem of minimizing the completion time of deep learning applications is formulated, and its NP-hardness is proved. To solve the problem, an integrated algorithm that combines merging and scheduling is proposed. Moreover, a real-world distributed platform is developed to evaluate the proposed algorithm. Experimental results show that, compared with the existing algorithms, the proposed algorithm decreases the completion time of deep learning applications by 63% and 75%, respectively, without extra control costs.
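To make the merge-then-schedule idea concrete, the sketch below shows a minimal toy heuristic for placing the layers of a deep learning model on edge and cloud processors so that the completion time (makespan) stays small. It is not the algorithm from the paper: the Layer/Processor fields, the merging threshold, and the greedy earliest-finish-time rule are all illustrative assumptions.

```python
"""Toy merge-and-schedule sketch (illustrative only, not the paper's algorithm).

Assumptions: layers form a linear chain; a block of merged layers runs entirely
on one processor; moving between processors costs a data transfer plus a fixed
context-switch overhead. All numbers in the example are hypothetical.
"""
from dataclasses import dataclass
from typing import List


@dataclass
class Layer:
    name: str
    flops: float       # compute demand of the layer (GFLOPs, hypothetical)
    out_bytes: float   # size of the layer's output feature map (MB)


@dataclass
class Processor:
    name: str
    speed: float       # GFLOPs per second
    bandwidth: float   # MB per second for data shipped to this processor
    switch_cost: float # fixed context-switch overhead per scheduled block (s)


def merge_layers(layers: List[Layer], max_out_bytes: float) -> List[List[Layer]]:
    """Greedily merge consecutive layers whose intermediate output is small,
    so they always run on the same processor and no transfer or context
    switch happens between them."""
    blocks, current = [], [layers[0]]
    for prev, layer in zip(layers, layers[1:]):
        if prev.out_bytes <= max_out_bytes:
            current.append(layer)   # cheap boundary: keep on the same processor
        else:
            blocks.append(current)  # large output: allow a cut here
            current = [layer]
    blocks.append(current)
    return blocks


def schedule(blocks: List[List[Layer]], procs: List[Processor]) -> float:
    """List-schedule merged blocks in chain order, always picking the
    processor that finishes the current block earliest; return the makespan."""
    finish, prev_proc, pending_bytes = 0.0, None, 0.0
    for block in blocks:
        flops = sum(layer.flops for layer in block)
        best_t, best_p = None, None
        for p in procs:
            transfer = 0.0 if p is prev_proc else pending_bytes / p.bandwidth
            t = finish + transfer + p.switch_cost + flops / p.speed
            if best_t is None or t < best_t:
                best_t, best_p = t, p
        finish, prev_proc = best_t, best_p
        pending_bytes = block[-1].out_bytes  # data that must move if we switch
    return finish


if __name__ == "__main__":
    layers = [Layer("conv1", 2.0, 50.0), Layer("conv2", 4.0, 1.0),
              Layer("fc1", 1.0, 0.5), Layer("fc2", 0.2, 0.01)]
    edge = Processor("edge", speed=10.0, bandwidth=100.0, switch_cost=0.01)
    cloud = Processor("cloud", speed=100.0, bandwidth=5.0, switch_cost=0.01)
    blocks = merge_layers(layers, max_out_bytes=2.0)
    print("estimated completion time (s):", schedule(blocks, [edge, cloud]))
```

The merging step trims the number of scheduled units, which is how it avoids unnecessary context switches and data transfers; the scheduling step then trades edge-side speed against transfer cost to the cloud, the same trade-off the paper's completion-time objective captures.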