Keywords
Reinforcement learning, Computer science, Scheduling (production processes), Overfitting, Dynamic priority scheduling, Two-level scheduling, Job shop scheduling, Artificial intelligence, Quality of service, Engineering, Artificial neural network, Operations management, Embedded system, Computer network, Routing (electronic design automation)
Authors
Xiaohan Wang, Lin Zhang, Yongkui Liu, Yuanjun Laili
Identifier
DOI: 10.1080/00207543.2023.2253326
Abstract
Dynamic task scheduling in cloud manufacturing (CMfg) is challenging because manufacturing requirements and services change constantly. To make instant decisions on task requirements, deep reinforcement learning-based (DRL-based) methods have been broadly applied to learn the scheduling policies of service providers. However, current DRL-based scheduling methods struggle to fine-tune a pre-trained policy effectively; as a result, training must start from scratch, which takes more time and can easily overfit the environment. Additionally, the uneven action distributions and inefficient output masks of most DRL-based methods greatly reduce training efficiency and thus degrade solution quality. To this end, this paper proposes an improved DRL-based approach for dynamic task scheduling in CMfg. First, the paper uncovers the causes behind the inadequate fine-tuning ability and low training efficiency observed in existing DRL-based scheduling methods. It then proposes a novel approach that addresses these issues by updating the scheduling policy while considering the distribution distance between the pre-training dataset and the in-training policy: uncertainty weights are introduced into the loss function, and the output mask is extended to the updating procedures. Numerical experiments on thirty real scheduling instances validate that the solution quality and generalization of the proposed approach surpass those of other DRL-based methods by up to 32.8% and 28.6%, respectively. In addition, the method can effectively fine-tune a pre-trained scheduling policy, yielding an average reward increase of up to 23.8%.
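The abstract names three concrete mechanisms: a distribution-distance term between the pre-training dataset and the in-training policy, uncertainty weights in the loss function, and an output mask carried into the policy-update step. As a rough illustration only (not the authors' implementation: the function and parameter names such as masked_policy_loss, pretrain_probs, and log_sigma_pg are hypothetical, and the paper's exact distance measure and weighting scheme may differ), a PyTorch-style sketch of how such a loss could be assembled:

import torch
import torch.nn.functional as F

def masked_policy_loss(logits, actions, advantages, pretrain_probs,
                       action_mask, log_sigma_pg, log_sigma_kl, beta=0.1):
    # (3) Output mask inside the update: infeasible services get a large
    # negative logit, so they receive ~zero probability and ~zero gradient.
    # (-1e9 instead of -inf keeps the KL term below finite.)
    masked_logits = logits.masked_fill(~action_mask, -1e9)
    log_probs = F.log_softmax(masked_logits, dim=-1)

    # Standard policy-gradient term on the taken actions.
    taken = log_probs.gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    pg_loss = -(taken * advantages).mean()

    # (1) Distribution distance: KL between the action distribution of the
    # pre-training dataset and the current masked policy. pretrain_probs is
    # assumed to respect the same feasibility mask (zero on masked actions).
    kl = F.kl_div(log_probs, pretrain_probs, reduction='batchmean')

    # (2) Uncertainty weighting (Kendall et al.-style): each term is scaled
    # by a learned precision exp(-log_sigma), plus a log_sigma penalty that
    # stops the weights from collapsing to zero.
    return (torch.exp(-log_sigma_pg) * pg_loss + log_sigma_pg
            + torch.exp(-log_sigma_kl) * beta * kl + log_sigma_kl)

In such a setup, log_sigma_pg and log_sigma_kl would be registered as learnable scalars (e.g. nn.Parameter(torch.zeros(()))) so that the balance between the policy-gradient and distance terms is learned during training rather than hand-tuned; beta is simply an assumed global scale for the distance term.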