Computer science
Artificial intelligence
Obstacle avoidance
Task (project management)
Robot
Robotics
Hierarchy
Mobile robot navigation
Computer vision
Navigation system
Mobile robot
Human-computer interaction
Robot control
Engineering
Economics
Systems engineering
Market economy
Authors
Linhai Xie, Andrew Markham, Niki Trigoni
Identifier
DOI:10.1109/icra40945.2020.9197523
Abstract
Learning-based visual navigation still remains a challenging problem in robotics, with two overarching issues: how to transfer the learnt policy to unseen scenarios, and how to deploy the system on real robots. In this paper, we propose a deep neural network based visual navigation system, SnapNav. Unlike map-based navigation or Visual-Teach-and-Repeat (VT&R), SnapNav only receives a few snapshots of the environment combined with directional guidance to allow it to execute the navigation task. Additionally, SnapNav can be easily deployed on real robots due to a two-level hierarchy: a high level commander that provides directional commands and a low level controller that provides real-time control and obstacle avoidance. This also allows us to effectively use simulated and real data to train the different layers of the hierarchy, facilitating robust control. Extensive experimental results show that SnapNav achieves a highly autonomous navigation ability compared to baseline models, enabling sparse, map-less navigation in previously unseen environments.
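The abstract describes a two-level hierarchy: a high-level commander that turns environment snapshots and directional guidance into coarse commands, and a low-level controller that handles real-time control and obstacle avoidance. The sketch below illustrates that decomposition in a minimal, hypothetical form; it is not the authors' implementation, and the class names, similarity measure, thresholds, and command vocabulary are all assumptions made for illustration (in SnapNav both levels are deep networks).

```python
# A minimal sketch (not the authors' code) of the two-level hierarchy described
# in the abstract: a high-level commander matches the current view against stored
# snapshots and emits a coarse directional command; a low-level controller turns
# that command plus range readings into velocities with reactive obstacle avoidance.
# All names, thresholds, and the similarity measure are hypothetical placeholders.

import numpy as np


class HighLevelCommander:
    """Emits a directional command by matching the current view to stored snapshots."""

    def __init__(self, snapshots, directions):
        # snapshots: reference images; directions: guidance attached to each snapshot.
        self.snapshots = snapshots
        self.directions = directions

    def command(self, current_view):
        # Placeholder similarity: negative L2 distance between images
        # (SnapNav would use a learned visual comparison instead).
        scores = [-np.linalg.norm(current_view - s) for s in self.snapshots]
        best = int(np.argmax(scores))
        return self.directions[best]          # e.g. "left", "straight", "right"


class LowLevelController:
    """Converts a directional command and range readings into (linear, angular) velocity."""

    def __init__(self, safe_distance=0.5):
        self.safe_distance = safe_distance    # metres before avoidance kicks in

    def control(self, direction, ranges):
        # Reactive obstacle avoidance: stop forward motion and turn away from the
        # closest obstacle if anything is within the safety margin.
        if min(ranges) < self.safe_distance:
            turn = 0.8 if np.argmin(ranges) < len(ranges) // 2 else -0.8
            return 0.0, turn
        angular = {"left": 0.5, "straight": 0.0, "right": -0.5}[direction]
        return 0.4, angular                    # cruise forward, steer toward guidance


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    snapshots = [rng.random((8, 8)) for _ in range(3)]   # stand-ins for snapshot images
    commander = HighLevelCommander(snapshots, ["left", "straight", "right"])
    controller = LowLevelController()

    view = rng.random((8, 8))                            # current camera view (dummy)
    ranges = [1.2, 0.9, 0.4, 1.5]                        # range-sensor readings (metres)

    cmd = commander.command(view)
    v, w = controller.control(cmd, ranges)
    print(f"command={cmd}, linear={v:.2f} m/s, angular={w:.2f} rad/s")
```

One practical consequence of this split, as noted in the abstract, is that the two layers can be trained on different data: the low-level controller can be trained largely in simulation, while the high-level commander is adapted with real snapshots.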