Keywords
Reinforcement learning
Scheduling (production processes)
Computer science
Markov decision process
Robot
Task (project management)
Artificial intelligence
Action selection
Operations research
Machine learning
Markov process
Engineering
Operations management
Statistics
Neuroscience
Biology
Systems engineering
Mathematics
Perception
Authors
Yu Tian, Jing Huang
Identifier
DOI: 10.1016/j.jmsy.2021.07.015
Abstract
Human-Robot Collaboration (HRC) presents an opportunity to improve the efficiency of manufacturing processes. However, existing task planning approaches for HRC remain limited in several ways; for example, co-robot encoding must rely on experts' knowledge, and real-time task scheduling is applicable only within small state-action spaces or simplified problem settings. In this paper, the HRC assembly process is formulated as a novel chessboard setting, in which the selection of a chess piece's move serves as an analogy for the decision making of both humans and robots during HRC assembly. To optimize the completion time, a Markov game model is considered that takes the task structure and the agent status as the state input and the overall completion time as the reward. Without experts' knowledge, this game model is capable of seeking a correlated equilibrium policy among agents, with convergence, when making real-time decisions in a dynamic environment. To improve the efficiency of finding an optimal task-scheduling policy, a deep Q-network (DQN) based multi-agent reinforcement learning (MARL) method is applied and compared with Nash-Q learning, dynamic programming, and a DQN-based single-agent reinforcement learning method. A height-adjustable desk assembly is used as a case study to demonstrate the effectiveness of the proposed algorithm with different numbers of tasks and agents.
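The abstract specifies enough of the learning setup (state = task structure plus agent status, reward tied to overall completion time, one Q-network per agent) to sketch it in code. The following is a minimal, hypothetical Python/PyTorch sketch of an independent per-agent DQN for such a scheduling game; it simplifies away the correlated-equilibrium computation the paper relies on, and every name and dimension (QNet, N_TASKS, STATE_DIM, the -1 step reward) is an illustrative assumption, not taken from the paper.

# Minimal sketch, assuming each agent (human or robot) holds a DQN whose
# input is a task-structure/agent-status state vector and whose output
# scores the next task to start. All names and sizes are hypothetical.
import random
import torch
import torch.nn as nn

N_TASKS, N_AGENTS = 8, 2
STATE_DIM = N_TASKS + N_AGENTS          # task done-flags + agent busy-flags

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_TASKS),     # one Q-value per candidate task
        )

    def forward(self, state):
        return self.layers(state)

# One online and one target network per agent (independent DQN);
# periodic target synchronization is omitted here for brevity.
qnets = [QNet() for _ in range(N_AGENTS)]
targets = [QNet() for _ in range(N_AGENTS)]
optims = [torch.optim.Adam(q.parameters(), lr=1e-3) for q in qnets]

def select_action(agent, state, eps=0.1):
    """Epsilon-greedy choice of the next task for one agent."""
    if random.random() < eps:
        return random.randrange(N_TASKS)
    with torch.no_grad():
        return int(qnets[agent](state).argmax())

def td_update(agent, s, a, r, s_next, gamma=0.99):
    """Single-transition DQN update; the reward r would encode the
    (negative) increment to overall completion time."""
    q = qnets[agent](s)[a]
    with torch.no_grad():
        q_next = targets[agent](s_next).max()
    loss = nn.functional.mse_loss(q, r + gamma * q_next)
    optims[agent].zero_grad()
    loss.backward()
    optims[agent].step()

# Illustrative usage: one fictitious transition per agent.
s = torch.rand(STATE_DIM)
s_next = torch.rand(STATE_DIM)
for i in range(N_AGENTS):
    a = select_action(i, s)
    td_update(i, s, a, r=-1.0, s_next=s_next)  # -1 per elapsed time step

In the paper's formulation, joint action selection would instead be coordinated through a correlated equilibrium over the agents' Q-values; the per-agent epsilon-greedy rule above stands in for that step only to keep the sketch short.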