Computer science
Reinforcement learning
Scheduling (production processes)
Artificial intelligence
Job shop scheduling
Job shop
Machine learning
Embedding
Perceptron
Representation
Generalization
Flow shop scheduling
Mathematical optimization
Artificial neural network
Train timetable
Mathematics
Mathematical analysis
Law
Operating system
Politics
Political science
Authors
Erdong Yuan, Liejun Wang, Shuli Cheng, Shiji Song, Wei Fan, Yongming Li
Identifier
DOI:10.1016/j.eswa.2023.123019
Abstract
The flexible job shop scheduling problem (FJSSP), a variant of the job shop scheduling problem, has a larger solution space, and researchers continue to seek effective methods for solving it. In recent years, deep reinforcement learning (DRL) has been applied to a variety of shop scheduling problems owing to its fast solving speed and strong generalization ability. In this paper, we first propose a new DRL framework that realizes both representation learning and policy learning. The framework adopts a lightweight multi-layer perceptron (MLP) as the state embedding network to extract state information, which reduces the computational complexity of the algorithm to some extent. Next, we design a new state representation and define a new action space. The state representation directly reflects the state features of candidate actions, which helps the agent capture more effective state information and improves its decision-making ability. The new action space solves the two subproblems of the FJSSP (machine assignment and operation sequencing) simultaneously with a single action space. Finally, we evaluate the performance of the policy model on four public datasets: the Barnes, Brandimarte, Dauzere, and Hurink datasets. Extensive experimental results on these datasets show that the proposed method achieves a better compromise between optimization ability and applicability than composite priority dispatching rules and existing state-of-the-art models.
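The core mechanism the abstract describes (scoring each feasible (operation, machine) pair with a lightweight MLP, so a single action simultaneously assigns a machine and sequences an operation) can be sketched roughly as follows. This is an illustrative NumPy sketch only: the feature layout, network sizes, and softmax sampling policy are assumptions, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(in_dim, hidden=32):
    """Initialize a small two-layer MLP (the 'lightweight state embedding')."""
    W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.1, (hidden, 1))
    b2 = np.zeros(1)
    return (W1, b1, W2, b2)

def mlp_score(x, params):
    """Score one candidate action's state-feature vector with the MLP."""
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)
    return (h @ W2 + b2)[0]

def select_action(candidates, params):
    """candidates: list of ((job, op, machine), feature_vector).
    Each candidate is an (operation, machine) pair, so one action space
    covers both machine assignment and operation sequencing.
    Scores all candidates and samples one via a softmax policy."""
    scores = np.array([mlp_score(f, params) for _, f in candidates])
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    idx = rng.choice(len(candidates), p=probs)
    return candidates[idx][0], probs[idx]

# Toy example: three candidate (job, op, machine) actions with 6 hypothetical
# state features each (e.g. processing time, machine load, remaining work).
feat_dim = 6
params = init_params(feat_dim)
cands = [((j, 0, m), rng.normal(size=feat_dim)) for j, m in [(0, 1), (0, 2), (1, 1)]]
action, p = select_action(cands, params)
```

In a full DRL setup, the sampled action's log-probability would feed a policy-gradient update; here only the forward pass is shown.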