Computer science
Reinforcement learning
Asynchronous communication
Cloud computing
Scheduling (production processes)
Distributed computing
Energy consumption
Artificial intelligence
Convergence (economics)
Real-time computing
Machine learning
Mathematical optimization
Computer networks
Operating systems
Biology
Economics
Economic growth
Mathematics
Ecology
Authors
Kaixuan Kang,Ding Ding,Huamao Xie,Lihong Zhao,Yinong Li,Yixuan Xie
Identifier
DOI:10.1016/j.future.2024.01.002
Abstract
Studies of resource provisioning in cloud computing have drawn extensive attention, since effective task scheduling solutions promise an energy-efficient way of utilizing resources while meeting the diverse requirements of users. Deep reinforcement learning (DRL) has demonstrated an outstanding capability for tackling this issue through online self-learning; however, it is still hampered by low sampling efficiency, poor sample validity, and slow convergence, especially for deadline-constrained applications. To address these challenges, this paper proposes an Imitation Learning Enabled Fast and Adaptive Task Scheduling (ILETS) framework based on DRL. First, we introduce behavior cloning to provide a well-behaved and robust model through Offline Initial Network Parameters Training (OINPT), so as to guarantee the initial decision-making quality of DRL. Next, we design a novel Online Asynchronous Imitation Learning (OAIL)-based method that helps the DRL agent re-optimize its policy and resist the oscillations caused by the highly dynamic cloud environment, ensuring that the agent moves towards the optimal policy quickly and stably. Extensive experiments on a real-world dataset demonstrate that the proposed ILETS consistently achieves shorter response times, lower energy consumption, and a higher success rate than the baselines and other state-of-the-art methods, while converging faster.
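To make the two-stage idea in the abstract concrete, the sketch below illustrates (in PyTorch) behavior-cloning pretraining of a scheduling policy from expert traces, followed by an online loss that combines a policy-gradient term with an imitation term. This is a minimal illustration, not the authors' implementation: the network architecture, state features, number of candidate VMs, the expert-trace source, and the weighting coefficient `beta` are all assumptions, and the asynchronous aspect of OAIL is omitted for brevity.

```python
# Minimal sketch (NOT the paper's code) of behavior-cloning pretraining
# plus an imitation-regularized online RL loss for task scheduling.
# All dimensions and data here are hypothetical stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM = 16  # assumed: task + cluster features per decision step
NUM_VMS = 8     # assumed: candidate VMs the scheduler chooses among

class PolicyNet(nn.Module):
    """Maps a scheduling state to logits over candidate VMs."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_VMS),
        )

    def forward(self, state):
        return self.body(state)

def pretrain_behavior_cloning(policy, expert_states, expert_actions,
                              epochs=10, lr=1e-3):
    """Stage 1 (OINPT-like): supervised cloning of expert decisions
    to give the DRL agent a sensible initial policy."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        loss = F.cross_entropy(policy(expert_states), expert_actions)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return policy

def combined_online_loss(policy, states, actions, advantages,
                         expert_states, expert_actions, beta=0.1):
    """Stage 2 (OAIL-like): policy-gradient loss plus an imitation
    penalty that damps policy oscillations under workload shifts.
    beta (assumed) trades off RL improvement vs. staying near the
    expert's behavior."""
    logp = F.log_softmax(policy(states), dim=-1)
    chosen = logp.gather(1, actions.unsqueeze(1)).squeeze(1)
    pg_loss = -(chosen * advantages).mean()
    il_loss = F.cross_entropy(policy(expert_states), expert_actions)
    return pg_loss + beta * il_loss

if __name__ == "__main__":
    torch.manual_seed(0)
    policy = PolicyNet()
    # Synthetic stand-ins for expert scheduling traces.
    xs = torch.randn(256, STATE_DIM)
    ys = torch.randint(0, NUM_VMS, (256,))
    pretrain_behavior_cloning(policy, xs, ys)
    adv = torch.randn(256)  # placeholder advantages from a critic
    print(combined_online_loss(policy, xs, ys, adv, xs, ys).item())
```

The design point the abstract emphasizes is the split of roles: the supervised term anchors the policy so early online decisions are already reasonable, while the RL term lets it improve beyond the expert as cloud conditions evolve.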