Keywords
Robotics
Computer science
Artificial intelligence
Computer vision
Lidar
Metric (unit)
Scale (ratio)
Terrain
Inertial measurement unit
Set (abstract data type)
Global map
Tree (set theory)
Remote sensing
Mathematics
Geography
Engineering
Cartography
Operations management
Programming language
Mathematical analysis
Authors
Ankit Prabhu,Xu Liu,Igor Spasojevic,Yuwei Wu,Yifei Shao,Dexter Ong,Jiuzhou Lei,Patrick Corey Green,Pratik Chaudhari,Vijay Kumar
Identifier
DOI:10.1016/j.ymssp.2023.111050
Abstract
To properly monitor the growth of forests and administer effective methods for their cultivation, forestry researchers require access to quantitative metrics such as diameter at breast height and stem taper profile of trees. These metrics are tedious and labor-intensive to measure by hand, especially at the scale of vast forests with thick undergrowth. Autonomous mobile robots can help to scale up such operations and provide an efficient method to capture the data. We present a set of algorithms for autonomous navigation and fine-grained metric-semantic mapping with a team of aerial robots in under-canopy forest environments. Our autonomous UAV system has 3D flight capabilities and relies only on a LIDAR and an IMU for state estimation and mapping. This allows each robot to accurately navigate in challenging forest environments with drastic terrain changes regardless of illumination conditions. Our deep-learning-driven fine-grained metric-semantic mapping module is capable of detecting and extracting detailed information such as the position, orientation, and stem taper profile of trees. This map of tree trunks is represented as a set of sparse cylinder models. Our semantic place recognition module leverages this sparse representation to efficiently estimate the relative transformation between multiple robots, and merge their information to build a globally consistent large-scale map. This ultimately allows us to scale up operations with multiple robots. Our system is able to achieve a mean absolute error of 1.45 cm for diameter estimation and 13.2 cm for relative position estimation between a pair of robots after place recognition and map merging.
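The abstract describes a place-recognition module that uses the sparse cylinder (trunk) map to estimate the relative transformation between two robots before merging their maps. As an illustrative sketch only, and not the authors' actual implementation, matched 2D trunk centers from two robots could be aligned with a standard least-squares rigid fit (the Kabsch algorithm); the function name and the assumption of pre-matched landmarks are hypothetical:

```python
import numpy as np

def estimate_relative_transform(trunks_a, trunks_b):
    """Estimate the rigid transform aligning robot B's tree-trunk
    landmarks onto robot A's, given matched 2D trunk centers.

    trunks_a, trunks_b: (N, 2) arrays of corresponding trunk positions.
    Returns R (2x2 rotation) and t (2,) such that
    trunks_a ~= trunks_b @ R.T + t.
    """
    a = np.asarray(trunks_a, dtype=float)
    b = np.asarray(trunks_b, dtype=float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (b - cb).T @ (a - ca)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the optimal solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = ca - R @ cb
    return R, t
```

In practice the hard part is establishing the trunk correspondences in the first place, which is what the paper's semantic place-recognition step addresses; the sparse cylinder representation keeps that matching efficient.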