Reinforcement learning
Computer science
Motion planning
Artificial intelligence
Autonomous learning
Path (computing)
Human–computer interaction
Machine learning
Robot
Mathematics
Mathematics education
Computer network
Authors
Yuting Zhou, Junchao Yang, Zhiwei Guo, Yu Shen, Keping Yu, Jerry Chun-Wei Lin
Identifier
DOI:10.1016/j.eswa.2024.124277
Abstract
Deep reinforcement learning (DRL) provides a new solution for autonomous robotic path planning in a known indoor environment. Previous studies mainly focused on robot path optimization but ignored blind areas in indoor exploration, naturally resulting in a low coverage rate and low exploration efficiency. Blind-area exploration is a crucial issue in indoor environments. This work proposes an indoor blind-area-oriented autonomous robotic path planning approach using DRL methods. First, the method is built on a double deep Q-network (DDQN) with prioritized experience replay (PER). Then the Blocking and Blind Angle (BBA) mechanism is proposed to explore blind areas, assisting in selecting the optimal exploration points for the next moment. Meanwhile, it alleviates the common sparse-reward problem in DRL. Finally, the presented method is successfully applied in a simulation environment using a cleaning robot. Experiments show that the proposed BBA-PER-DDQN not only explores the blind areas but also accelerates convergence. The results show that the training time is reduced from more than one hour to 36 min, and the coverage rate is 11.37% higher than that of the baseline algorithms.
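The abstract names two standard components, double DQN target computation and prioritized experience replay, without detailing them. The sketch below illustrates both in a minimal numpy form; it is a generic illustration of these published techniques, not the paper's actual implementation, and all class/function names (`PrioritizedReplayBuffer`, `ddqn_targets`) and hyperparameter values are assumptions for the example.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (PER) buffer (illustrative)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly priorities skew sampling (0 = uniform)
        self.data = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, transition):
        # New transitions get the current max priority so each is sampled at least once.
        max_p = self.priorities.max() if self.data else 1.0
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        p = self.priorities[: len(self.data)] ** self.alpha
        probs = p / p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        # Importance-sampling weights correct the bias of non-uniform sampling.
        weights = (len(self.data) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return idx, [self.data[i] for i in idx], weights

    def update_priorities(self, idx, td_errors, eps=1e-6):
        # Priority is proportional to the absolute TD error (plus eps to stay nonzero).
        self.priorities[idx] = np.abs(td_errors) + eps

def ddqn_targets(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    """Double DQN target: the online net selects the next action,
    the target net evaluates it, reducing Q-value overestimation."""
    best_actions = np.argmax(next_q_online, axis=1)
    evaluated = next_q_target[np.arange(len(rewards)), best_actions]
    return rewards + gamma * (1.0 - dones) * evaluated
```

In a full agent, the TD errors between `ddqn_targets(...)` and the online net's predictions would both drive the gradient update (scaled by the importance weights) and be fed back via `update_priorities`.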