Multi-robot Cooperative Navigation Method based on Multi-agent Reinforcement Learning in Sparse Reward Tasks
Reinforcement Learning
Computer Science
Artificial Intelligence
Robotics
Human-Computer Interaction
Authors
Kai Li, Quanhu Wang, Mengyao Gong, Jiahui Li, Haobin Shi
Identifiers
DOI:10.1109/isceic59030.2023.10271221
Abstract
Multi-robot systems can collaborate to accomplish more complex tasks than a single robot. Cooperative navigation is the basis for multi-robot systems to complete rescue, reconnaissance, and other tasks in high-risk areas in place of human beings. Multi-agent reinforcement learning (MARL) is the most effective method for controlling multi-robot cooperation, but the sparsity of rewards limits its application in real scenarios. In this paper, a curiosity-inspired MARL approach called CIMADDPG is proposed to promote robot exploration. A global curiosity allocation mechanism is designed to determine each agent's contribution to the global reward. In addition, to ensure that the collaboration of agents is not lost during exploration, a dual critic network is designed to jointly guide the update of the policy network. Finally, the performance of the proposed method is verified in the multi-agent particle environment (MPE) and a multi-robot (Turtlebot3) cooperative navigation simulation environment. The experimental results show that CIMADDPG improves on SOTA performance by 23.53%~48.84% and achieves a high success rate in multi-robot cooperative navigation.
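The abstract names two mechanisms: a global curiosity allocation that credits each agent for its share of the shared exploration bonus, and a dual critic that blends a task-reward value with a curiosity value when updating the policy. The sketch below is a minimal, hypothetical illustration of those two ideas; the allocation rule (proportional to each agent's prediction error), the function names, and the weighting coefficient `beta` are assumptions for illustration, not the paper's actual implementation.

```python
def allocate_global_curiosity(global_bonus, agent_errors):
    """Split a shared curiosity bonus across agents.

    Assumed scheme: each agent's share is proportional to its own
    forward-model prediction error (an ICM-style novelty signal).
    """
    total = sum(agent_errors)
    if total == 0:
        # No agent was surprised: split the bonus evenly.
        n = len(agent_errors)
        return [global_bonus / n] * n
    return [global_bonus * e / total for e in agent_errors]

def dual_critic_policy_loss(q_extrinsic, q_curiosity, beta=0.5):
    """Combine two critic values to guide the actor update.

    The actor maximizes a weighted sum of the task-reward critic and the
    curiosity critic, so exploration does not override cooperation.
    The weight beta is an illustrative hyperparameter.
    """
    return -(q_extrinsic + beta * q_curiosity)

# Example: agent 1 is three times as "surprised" as agent 0,
# so it receives three quarters of the global bonus.
shares = allocate_global_curiosity(1.0, [1.0, 3.0])  # → [0.25, 0.75]
loss = dual_critic_policy_loss(2.0, 2.0, beta=0.5)   # → -3.0
```

In a full MADDPG-style agent, each critic would be a neural network conditioned on the joint observations and actions of all agents, and this combined loss would be backpropagated through the actor only.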