Symbols
Reinforcement learning
Computer science
Scheduling (production processes)
Job shop scheduling
Artificial intelligence
Markov decision process
Mathematical optimization
Markov process
Mathematics
Arithmetic
Metro train timetable
Statistics
Operating system
Authors
Chien-Liang Liu, Chun-Jan Tseng, Tzu-Hsuan Huang, Jia-Hong Wang
Source
Journal: IEEE Transactions on Systems, Man, and Cybernetics
[Institute of Electrical and Electronics Engineers]
Date: 2023-07-18
Volume/Issue: 53 (11): 6792-6804
Cited by: 7
Identifier
DOI:10.1109/tsmc.2023.3289322
Abstract
Parallel machine scheduling (PMS) is a common setting in many manufacturing facilities, in which each job may be processed on any one of several machines of the same type. It involves scheduling $n$ jobs on $m$ machines to minimize certain objective functions. For preemptive scheduling, most problems are not only NP-hard but also difficult in practice. Moreover, many unexpected events, such as machine failure and requirement changes, are inevitable in practical production processes, meaning that static scheduling methods require rescheduling. Deep reinforcement learning (DRL), which combines deep learning and reinforcement learning, has achieved promising results in several domains and has shown the potential to solve large Markov decision process (MDP) optimization tasks. Moreover, PMS problems can be formulated as MDPs, inspiring us to devise a DRL method to deal with PMS problems in a dynamic environment. We develop a novel DRL-based PMS method, called DPMS, in which the model is designed around the characteristics of PMS when defining the states and the reward. The actions correspond to dispatching rules, so DPMS can be considered a meta-dispatching-rule system that efficiently selects a sequence of dispatching rules based on the current environment or unexpected events. The experimental results demonstrate that DPMS yields promising results in a dynamic environment by learning from the interactions between the agent and the environment. Furthermore, we conduct extensive experiments to analyze DPMS in the context of developing a DRL method for dynamic PMS problems.