关键词 (Keywords)
Reinforcement learning, Computer science, Software deployment, Function (biology), Artificial intelligence, Semantics (computer science), Range (aeronautics), Human-computer interaction, Engineering, Software engineering, Evolutionary biology, Biology, Aerospace engineering, Programming language
Authors
Kyowoon Lee,Seongun Kim,Jaesik Choi
Identifier
DOI:10.1109/icra48891.2023.10160371
Abstract
For robotic vehicles to navigate robustly and safely in unseen environments, it is crucial to decide the most suitable navigation policy. However, most existing deep reinforcement learning based navigation policies are trained with a hand-engineered curriculum and reward function, which are difficult to deploy in a wide range of real-world scenarios. In this paper, we propose a framework to learn a family of low-level navigation policies and a high-level policy for deploying them. The main idea is that, instead of learning a single navigation policy with a fixed reward function, we simultaneously learn a family of policies that exhibit different behaviors under a wide range of reward functions. We then train a high-level policy that adaptively deploys the most suitable navigation skill. We evaluate our approach in simulation and in the real world, and demonstrate that our method can learn diverse navigation skills and adaptively deploy them. We also illustrate that our proposed hierarchical learning framework offers explainability by providing semantics for the behavior of an autonomous agent.
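The hierarchy described in the abstract — a family of low-level skills spanning a range of reward trade-offs, plus a high-level policy that picks the most suitable one per situation — can be sketched as follows. This is a minimal illustrative stub, not the paper's method: in the paper both levels are learned with deep reinforcement learning, whereas here the skills are hand-written rules parameterized by a hypothetical `caution` weight, and the selector uses a fixed heuristic on a hypothetical `obstacle_proximity` observation.

```python
class NavigationSkill:
    """One low-level policy from the family; `caution` stands in for the
    reward-function parameter that shapes its behavior (assumed name)."""

    def __init__(self, caution: float):
        self.caution = caution  # trades speed against safety margin

    def act(self, obs: dict) -> dict:
        # More cautious skills slow down more near obstacles.
        speed = max(0.0, 1.0 - self.caution * obs["obstacle_proximity"])
        return {"speed": speed}


class HighLevelPolicy:
    """Selects which skill to deploy; a learned selector in the paper,
    a simple rule here for illustration."""

    def __init__(self, skills: list):
        self.skills = skills

    def select(self, obs: dict) -> int:
        # Cluttered scenes map to more cautious skills, open space to faster ones.
        idx = int(obs["obstacle_proximity"] * len(self.skills))
        return min(idx, len(self.skills) - 1)


# A family of three skills with increasing caution, and a selector over them.
skills = [NavigationSkill(caution=c) for c in (0.2, 0.5, 0.9)]
manager = HighLevelPolicy(skills)

obs = {"obstacle_proximity": 0.8}       # a cluttered scene
chosen = manager.select(obs)            # picks the most cautious skill
action = skills[chosen].act(obs)        # that skill produces a slow action
```

Because the high-level choice is an explicit skill index, it carries the kind of semantics the abstract mentions: one can read off *which* behavior (cautious vs. fast) the agent deployed and why.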