Reinforcement learning
Planner
Computer science
Obstacle avoidance
Variety (cybernetics)
Obstacle
Maxima and minima
Mobile robot
Motion planning
State (computer science)
Control (management)
Artificial intelligence
Mobile robot navigation
Robot
Navigation system
Range (aeronautics)
Real-time computing
Robot control
Engineering
Geography
Mathematical analysis
Mathematics
Archaeology
Algorithm
Aerospace engineering
Authors
Linh Kästner,Johannes Cox,Teham Buiyan,Jens Lambrecht
Identifier
DOI:10.1109/icra46639.2022.9811797
Abstract
Autonomous navigation of mobile robots is an essential aspect in use cases such as delivery, assistance, or logistics. Although traditional planning methods are well integrated into existing navigation systems, they struggle in highly dynamic environments. On the other hand, Deep-Reinforcement-Learning-based methods show superior performance in dynamic obstacle avoidance but are not suitable for long-range navigation and struggle with local minima. In this paper, we propose a Deep-Reinforcement-Learning-based control switch, which is able to select between different planning paradigms based solely on sensor data observations. To this end, we develop an interface to efficiently operate multiple model-based as well as learning-based local planners and integrate a variety of state-of-the-art planners to be selected by the control switch. Subsequently, we evaluate our approach against each planner individually and find improvements in navigation performance, especially in highly dynamic scenarios. Our planner preferred learning-based approaches in situations with a high number of obstacles while relying on traditional model-based planners in long corridors or empty spaces.
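To make the control-switch idea in the abstract concrete, the following is a minimal Python sketch, not the authors' implementation. All names (LocalPlanner, ControlSwitch, crowdedness_policy, the planner stand-ins) are hypothetical, and a simple obstacle-count heuristic stands in for the learned DRL selection policy; the key point is only the structure: a common planner interface plus a policy that maps raw sensor observations to a planner choice.

```python
# Hypothetical sketch of a planner control switch: a selection policy observes
# sensor data and picks one of several interchangeable local planners.
# Names and behaviors are illustrative assumptions, not the paper's code.

from dataclasses import dataclass
from typing import Callable, List, Sequence


@dataclass
class VelocityCommand:
    linear: float   # forward velocity in m/s
    angular: float  # yaw rate in rad/s


class LocalPlanner:
    """Common interface so model-based and learning-based planners are interchangeable."""

    def compute_cmd(self, scan: Sequence[float], goal: Sequence[float]) -> VelocityCommand:
        raise NotImplementedError


class ModelBasedPlanner(LocalPlanner):
    # Stand-in for a classical local planner; here it simply steers toward the goal heading.
    def compute_cmd(self, scan, goal):
        return VelocityCommand(linear=0.5, angular=0.1 * goal[1])


class LearningBasedPlanner(LocalPlanner):
    # Stand-in for a DRL obstacle-avoidance policy; here it slows down near close readings.
    def compute_cmd(self, scan, goal):
        return VelocityCommand(linear=0.2 if min(scan) < 1.0 else 0.5, angular=0.0)


class ControlSwitch:
    """Selects a planner from sensor observations via a selection policy."""

    def __init__(self, planners: List[LocalPlanner],
                 select: Callable[[Sequence[float]], int]):
        self.planners = planners
        self.select = select  # in the paper this role is played by a trained DRL policy

    def step(self, scan: Sequence[float], goal: Sequence[float]) -> VelocityCommand:
        idx = self.select(scan)                    # choose a planning paradigm
        return self.planners[idx].compute_cmd(scan, goal)


# Heuristic placeholder for the learned policy: many close laser returns -> learning-based planner.
def crowdedness_policy(scan: Sequence[float]) -> int:
    close_readings = sum(1 for r in scan if r < 2.0)
    return 1 if close_readings > 10 else 0


if __name__ == "__main__":
    switch = ControlSwitch([ModelBasedPlanner(), LearningBasedPlanner()], crowdedness_policy)
    scan = [5.0] * 30 + [0.8] * 15      # mostly free space plus a cluster of nearby obstacles
    print(switch.step(scan, goal=(3.0, 0.5)))
```

In this sketch the switch stays agnostic to how each planner works internally, which mirrors the abstract's point that both model-based and learning-based local planners sit behind one interface and only the selection step is learned.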