Multirotor
Computer science
LiDAR
Computer vision
Artificial intelligence
Aeronautics
Remote sensing
Real-time computing
Aerospace engineering
Engineering
Geology
Authors
Jeonggeun Lim,M.C. Kim,Hyungwook Yoo,Jongho Lee
Source
Journal: IEEE-ASME Transactions on Mechatronics
[Institute of Electrical and Electronics Engineers]
Date: 2024-03-13
Volume/Issue: 29 (5): 3960-3970
Citations: 1
Identifier
DOI:10.1109/tmech.2024.3369028
Abstract
Autonomous or manually operated aerial vehicles should be able to land safely after conducting missions. While human pilots can determine safe landing spots for manned or remote-controlled aerial vehicles, unmanned aerial vehicles (UAVs) need to autonomously evaluate their surrounding environments to land safely. In this article, we present fully autonomous strategies for searching for safe landable spots and landing. This approach combines sensor readings from a camera with light detection and ranging (LiDAR) data. Class-wise complementary criteria enable safe landable regions to be determined, based on slope extraction from the LiDAR point cloud and deep-learning-based semantic segmentation of camera images. All the required components, including algorithms, heterogeneous sensors, and processors, were implemented on a multirotor UAV for standalone operation. Real-time outdoor experiments demonstrated fully autonomous search and landing on safe spots in various environments that included water, grass, trees, and shadows.
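The abstract's two complementary criteria, geometric slope from the LiDAR point cloud and semantic safety from camera segmentation, can be illustrated with a minimal sketch. This is not the authors' implementation; the plane-fit slope estimate, the 10-degree threshold, and the boolean fusion rule are illustrative assumptions.

```python
import numpy as np

def local_slope_deg(points):
    """Estimate terrain slope (degrees) for a patch of LiDAR points
    (N x 3 array of x, y, z) via a least-squares plane fit: the plane
    normal is the singular vector with the smallest singular value."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    # Tilt is the angle between the plane normal and the vertical axis.
    cos_tilt = abs(normal[2]) / np.linalg.norm(normal)
    return np.degrees(np.arccos(np.clip(cos_tilt, -1.0, 1.0)))

def landable_mask(slope_deg, seg_safe, max_slope_deg=10.0):
    """Class-wise complementary fusion (sketch): a cell is landable
    only if its slope is below the threshold AND the semantic
    segmentation marks it safe (e.g. not water or trees)."""
    return (slope_deg < max_slope_deg) & seg_safe
```

A cell flagged safe by segmentation but lying on steep terrain, or a flat cell classified as water, would both be rejected, which is the complementary behavior the abstract describes.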