Journal: Journal of the American Helicopter Society [American Helicopter Society] Date: 2025-01-01
Identifier
DOI: 10.4050/jahs.70.022004
Abstract
This paper addresses the problem of developing a control algorithm for autonomous ship landing of vertical take-off and landing capable unmanned aerial vehicles (UAVs), using only a monocular camera on the UAV for tracking and localization. Ship landing is challenging due to the small landing space, six degrees of freedom (6-DOF) ship deck motion, limited visual references for localization, and adversarial environmental conditions such as wind gusts. Our approach is motivated by the actual ship landing procedure followed by Navy helicopter pilots, in which they track a known visual cue, the horizon reference bar, that is present on most Navy ships. We first develop a computer vision algorithm that estimates the relative position of the UAV with respect to the horizon reference bar on the landing platform using the image stream from a monocular camera on the UAV. We then develop a robust reinforcement learning (RL) algorithm for autonomously controlling and navigating the UAV toward the landing platform, even in adversarial environmental conditions such as wind gusts. We demonstrate the superior performance and adaptability of our RL algorithm compared to a benchmark nonlinear proportional-integral-derivative control approach via simulations in the Gazebo environment and real-world experiments using a Parrot ANAFI quadrotor and a subscale ship platform undergoing 6-DOF deck motion. Video of the real-world experiments and demonstrations is available at <a href="https://www.youtube.com/watch?v=4SiSVvzDrjg">https://www.youtube.com/watch?v=4SiSVvzDrjg</a>.