Computer science
Reinforcement learning
Cloud computing
Scheduling (production processes)
Energy consumption
Distributed computing
Markov decision process
Two-level scheduling
Dynamic priority scheduling
Artificial intelligence
Markov process
Metro train timetable
Operating system
Mathematical optimization
Ecology
Statistics
Mathematics
Biology
Authors
Huanhuan Hou,Siti Nuraishah Agos Jawaddi,Azlan Ismail
Identifier
DOI:10.1016/j.future.2023.10.002
Abstract
The expanding scale of cloud data centers and the diversification of user services have led to increased energy consumption and greenhouse gas emissions, with long-term detrimental effects on the environment. To address this issue, scheduling techniques that reduce energy usage have become a hot topic in cloud computing and cluster management. The Deep Reinforcement Learning (DRL) approach, which combines the advantages of Deep Learning and Reinforcement Learning, has shown promise in resolving scheduling problems in cloud computing. However, literature reviews of task scheduling that employs DRL techniques to reduce energy consumption are limited. In this paper, we survey and analyze the energy consumption models used for scheduling goals, provide an overview of the DRL algorithms used in the literature, and quantitatively compare how the elements of the Markov Decision Process are modeled. We also summarize the experimental platforms, datasets, and neural network structures used in the DRL algorithms. Finally, we analyze the research gap in DRL-based task scheduling and discuss existing challenges as well as future directions from various aspects. This paper offers a correlated perspective on the task scheduling problem and the DRL approach, and provides a reference for in-depth research on DRL-based task scheduling. Our findings suggest that DRL-based scheduling techniques can significantly reduce energy consumption in cloud data centers, making them a promising area for further investigation.
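The core technique the abstract describes, formulating energy-aware task scheduling as a Markov Decision Process whose reward reflects energy cost, can be sketched minimally. The snippet below is an illustrative assumption, not taken from the surveyed papers: the state is per-machine CPU utilization, the action is the machine chosen for the next task, and the reward is the negative incremental power under a commonly used linear server power model P = P_idle + (P_max − P_idle) · utilization. A DRL agent would learn the policy; here a greedy baseline policy stands in for it.

```python
import random

# Illustrative power-model parameters (watts); not from the paper.
P_IDLE, P_MAX = 100.0, 200.0


def power(util):
    """Linear server power model as a function of CPU utilization in [0, 1]."""
    return P_IDLE + (P_MAX - P_IDLE) * util


def step(state, action, task_load):
    """MDP transition: assign a task (fractional CPU load) to machine `action`.

    Returns the next state and a reward equal to the negative power
    increase across the cluster, so maximizing reward minimizes energy.
    """
    new_state = list(state)
    new_state[action] = min(1.0, new_state[action] + task_load)
    reward = -(power(new_state[action]) - power(state[action]))
    return tuple(new_state), reward


def greedy_policy(state):
    """Baseline policy: place the task on the least-utilized machine."""
    return min(range(len(state)), key=lambda i: state[i])


if __name__ == "__main__":
    random.seed(0)
    state = (0.0, 0.0, 0.0)  # three idle machines
    total_reward = 0.0
    for _ in range(10):  # schedule ten tasks of random load
        load = random.uniform(0.05, 0.2)
        action = greedy_policy(state)
        state, reward = step(state, action, load)
        total_reward += reward
    print("final utilizations:", [round(u, 2) for u in state])
    print("total reward (negative energy delta, W):", round(total_reward, 1))
```

In a DRL formulation such as DQN, `greedy_policy` would be replaced by an argmax over a neural network's Q-value estimates, and `(state, action, reward, next_state)` tuples would be stored for replay-based training.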