Journal: IEEE Robotics and Automation Letters
Date: 2023-11-20
Volume/Issue: 9(1): 635-642
Citations: 6
Identifier
DOI: 10.1109/LRA.2023.3334978
Abstract
Navigation of unmanned aerial vehicles (UAVs) in unknown environments is a challenging problem: the UAV must reach its target through static obstacles in a safe and energy-efficient manner. Traditional motion planning algorithms tend to fail when obstacles are dense. Navigation algorithms based on reinforcement learning offer better generalization and robustness, but the trajectories generated by end-to-end methods are often neither smooth nor dynamically feasible. In this work, a classical motion planning algorithm and a deep reinforcement learning (DRL) algorithm are combined into a framework named RLPlanNav, which aims to solve the problem of safe and dynamic navigation of UAVs in unknown environments. The upper-layer DRL component of the framework receives raw sensor information and generates the next local target, while the lower-layer classical planner generates a smooth, safe trajectory to reach that target. The DRL algorithm incorporates an LSTM network to add memory capability, ensuring the effectiveness of local target selection. The proposed navigation framework is tested in a simulated environment with randomly generated static obstacles, and it achieves higher navigation success rates and more kinematically compliant trajectories than traditional motion planning methods and end-to-end methods.
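To make the two-layer structure described in the abstract concrete, below is a minimal sketch of how an LSTM-based upper-layer policy could feed local targets to a lower-layer planner. All class and function names, network sizes, and the placeholder planner are hypothetical illustrations under assumed interfaces, not the authors' RLPlanNav implementation; the abstract does not specify the actual sensor format, planner, or training setup.

```python
# Hypothetical sketch of a two-layer UAV navigation loop: an LSTM policy
# (upper layer) picks local targets from raw sensor sequences, and a
# classical planner stand-in (lower layer) moves toward each target.
import torch
import torch.nn as nn

class LocalTargetPolicy(nn.Module):
    """Upper layer: maps a sequence of raw sensor observations to the
    next local target (a relative x, y, z offset). The LSTM provides
    memory over past observations, as the abstract describes."""
    def __init__(self, obs_dim=64, hidden_dim=128):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 3)  # local target offset

    def forward(self, obs_seq, hidden=None):
        # obs_seq: (batch, time, obs_dim) raw sensor observations
        x = torch.relu(self.encoder(obs_seq))
        x, hidden = self.lstm(x, hidden)   # carry memory across steps
        target = self.head(x[:, -1])       # decode from the latest step
        return target, hidden

def classical_planner(pose, local_target):
    """Lower layer placeholder: a real system would run a classical
    motion planner (e.g. a kinodynamic or minimum-snap planner) that
    produces a smooth, safe trajectory to the target. Here we simply
    take a bounded step to keep the sketch self-contained."""
    step = torch.clamp(local_target - pose, -0.5, 0.5)
    return pose + step

# One navigation episode; random tensors stand in for real sensor data.
policy = LocalTargetPolicy()
pose = torch.zeros(3)
hidden = None
for t in range(10):
    obs = torch.randn(1, 1, 64)                   # one raw sensor frame
    with torch.no_grad():
        offset, hidden = policy(obs, hidden)      # upper layer: local target
    local_target = pose + offset.squeeze(0)
    pose = classical_planner(pose, local_target)  # lower layer: follow it
```

Confining the learned component to local target selection, while the lower layer enforces smoothness and kinematic limits, reflects the division of labor the abstract attributes to RLPlanNav.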