Reinforcement learning
Computer science
Motion planning
Robustness (evolution)
Artificial intelligence
Trajectory
Planner
Kinematics
Computer vision
Obstacle avoidance
Motion (physics)
Generalization
Robot
Mobile robot
Mathematics
Biochemistry
Classical mechanics
Gene
Physics
Mathematical analysis
Chemistry
Astronomy
Author
Yuntao Xue, Weisheng Chen
Source
Journal: IEEE Robotics and Automation Letters
Date: 2023-11-20
Volume/Issue: 9 (1): 635-642
Citations: 6
Identifier
DOI: 10.1109/lra.2023.3334978
Abstract
Navigation of unmanned aerial vehicles (UAVs) in unknown environments is a challenging problem, and it is worth considering how to reach the target through static obstacles in a safe and energy-efficient manner. Traditional motion planning algorithms tend to fail when obstacles are dense. Navigation algorithms based on reinforcement learning offer better generalization and robustness, but the trajectories generated by end-to-end methods are not smooth or dynamically feasible enough. In this work, a classical motion planning algorithm and a deep reinforcement learning (DRL) algorithm are combined into a framework named RLPlanNav, which aims to solve the problem of safe and dynamic navigation of UAVs in unknown environments. The upper-layer DRL component of the framework receives raw sensor information and generates the next local target, while the lower-layer classical planner generates a smooth and safe trajectory to reach that target. The DRL algorithm incorporates an LSTM network to add memory capability, thereby ensuring the effectiveness of local target selection. The proposed navigation framework is tested in a simulated environment with randomly generated static obstacles, and it achieves higher navigation success rates and more kinematically compliant trajectories than traditional motion planning methods and end-to-end methods.
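The abstract describes a two-layer structure: an upper-layer DRL policy with an LSTM turns raw sensor readings into the next local target, and a lower-layer classical planner produces a smooth trajectory toward that target. The sketch below illustrates this hierarchy only; the names (LocalGoalPolicy, simple_local_planner), the observation and goal dimensions, and the straight-line interpolator are all assumptions and are not the authors' implementation.

```python
# Minimal sketch of a hierarchical "DRL + classical planner" navigation loop,
# loosely following the structure described in the abstract. All names and
# dimensions are hypothetical; the lower-layer planner is replaced by a
# straight-line interpolator purely for illustration.

import numpy as np
import torch
import torch.nn as nn


class LocalGoalPolicy(nn.Module):
    """Upper layer: LSTM policy mapping raw sensor readings to a local target offset."""

    def __init__(self, obs_dim: int, hidden_dim: int = 64, goal_dim: int = 3):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, goal_dim)

    def forward(self, obs_seq, hidden=None):
        # obs_seq: (batch, time, obs_dim); `hidden` carries memory across steps
        out, hidden = self.lstm(obs_seq, hidden)
        local_goal = torch.tanh(self.head(out[:, -1]))  # offset in [-1, 1]^goal_dim
        return local_goal, hidden


def simple_local_planner(position, local_goal, n_waypoints=10):
    """Lower layer stand-in: straight-line waypoints toward the local goal.
    The paper's lower layer is a classical planner producing smooth, safe
    trajectories; this interpolation is only a placeholder."""
    alphas = np.linspace(0.0, 1.0, n_waypoints)[:, None]
    return position + alphas * (local_goal - position)


if __name__ == "__main__":
    policy = LocalGoalPolicy(obs_dim=16)
    hidden = None
    position = np.zeros(3)
    for step in range(5):
        obs = torch.randn(1, 1, 16)                 # stand-in for raw sensor data
        goal_offset, hidden = policy(obs, hidden)   # upper layer picks local target
        local_goal = position + goal_offset.squeeze(0).detach().numpy()
        trajectory = simple_local_planner(position, local_goal)
        position = trajectory[-1]                   # follow the planned trajectory
    print("final position:", position)
```

A real system would train the LSTM policy with a DRL algorithm and replace the interpolator with an obstacle-aware local planner; the sketch only shows how the two layers exchange local targets and trajectories.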