Computer Science
Reinforcement Learning
Interleaving
Cloud Computing
Scheduling (production processes)
Distributed Computing
Edge Device
Task (project management)
Enhanced Data Rates for GSM Evolution
Artificial Intelligence
Operating System
Mathematical Optimization
Systems Engineering
Mathematics
Engineering
Authors
Xinglong Pei,Penghao Sun,Yuxiang Hu,Dan Li,Lihua Tian,Ziyong Li
Identifiers
DOI:10.1016/j.future.2024.06.033
Abstract
Collaborative cloud–edge computing has been systematically developed to balance the efficiency and cost of computing tasks for many emerging technologies. To improve the overall performance of a cloud–edge system, existing works have made progress in task scheduling by dynamically distributing tasks with different latency thresholds to edge and cloud nodes. However, the relationship of multi-resource queueing among different tasks within a node is not well studied, which leaves the merit of optimizing multi-resource queueing unexplored. To fill this gap and improve the efficiency of cloud–edge systems, we propose DeepMIC, a deep reinforcement learning (DRL)-based multi-resource interleaving scheme for task scheduling in cloud–edge systems. First, we formulate a multi-resource queueing model that aims to minimize the weighted-sum delay of the pending tasks. The proposed model jointly considers the requests for computation, caching, and forwarding resources within a node, based on network information collected through Software-Defined Networking (SDN) and the management framework of Mobile Edge Computing (MEC). Then, we customize a DRL algorithm to solve the model in a timely manner, catering to the high throughput of tasks. Finally, we demonstrate that, through flexible scheduling of tasks, DeepMIC reduces the average task response time and achieves better resource utilization.
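A minimal sketch of the objective described in the abstract, with notation assumed for illustration (the paper's own symbols may differ): let $\mathcal{T}$ be the set of pending tasks at a node, $w_i$ the weight of task $i$, and $d_i^{\mathrm{comp}}$, $d_i^{\mathrm{cache}}$, $d_i^{\mathrm{fwd}}$ its queueing delays at the computation, caching, and forwarding resources under a scheduling policy $\pi$. The weighted-sum delay minimization can then be read as

\[
\min_{\pi} \; \sum_{i \in \mathcal{T}} w_i \left( d_i^{\mathrm{comp}}(\pi) + d_i^{\mathrm{cache}}(\pi) + d_i^{\mathrm{fwd}}(\pi) \right),
\]

which a DRL agent could approximate online, e.g. by treating per-resource queue states as observations and task-to-resource interleaving decisions as actions; this is an interpretive sketch, not the paper's exact formulation.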