Computer science
Artificial intelligence
Monocular
Ground truth
RGB color model
Computer vision
Deep learning
Photogrammetry
Obstacle avoidance
Depth map
Image (mathematics)
Robot
Mobile robot
Authors
Horatiu Florea, Vlad-Cristian Miclea, Sergiu Nedevschi
Identifier
DOI:10.1109/iccp53602.2021.9733671
Abstract
Acquiring scene depth information remains a crucial step in most autonomous navigation applications, enabling advanced features such as obstacle avoidance and SLAM. In many situations, extracting this data from camera feeds is preferred to the alternative, active depth-sensing hardware such as LiDARs. As in many other fields, Deep Learning solutions for processing images and generating depth predictions have seen major improvements in recent years. In order to support further research of such techniques, we present a new dataset, WildUAV, consisting of high-resolution RGB imagery for which dense depth ground truth data has been generated based on 3D maps obtained through photogrammetry. Camera positioning information is also included, along with additional video sequences useful in self-supervised learning scenarios where ground truth data is not required. Unlike the traditional automotive datasets typically used for depth prediction tasks, ours is designed to support on-board applications for Unmanned Aerial Vehicles in unstructured, natural environments, which prove to be more challenging. We perform several experiments using supervised and self-supervised monocular depth estimation methods and discuss the results. Data links and additional details will be provided on the project's Github repository.
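The abstract mentions evaluating supervised and self-supervised monocular depth estimation methods against dense ground truth. The paper does not specify its evaluation protocol here, but a minimal sketch of the metrics commonly used in this literature (absolute relative error, RMSE, and the δ < 1.25 accuracy threshold) might look as follows; the function name, depth range, and the synthetic example values are illustrative assumptions, not from the paper:

```python
import numpy as np

def depth_metrics(pred, gt, min_depth=1e-3, max_depth=80.0):
    """Common monocular depth evaluation metrics (illustrative sketch).

    pred, gt: arrays of predicted / ground-truth depths in metres.
    Pixels outside (min_depth, max_depth) are masked out, as ground
    truth is typically sparse or clipped at long range.
    """
    mask = (gt > min_depth) & (gt < max_depth)
    pred, gt = pred[mask], gt[mask]

    # Absolute relative error: mean of |pred - gt| / gt
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    # Root mean squared error in metres
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    # Threshold accuracy: fraction of pixels with max ratio below 1.25
    thresh = np.maximum(gt / pred, pred / gt)
    a1 = np.mean(thresh < 1.25)
    return {"abs_rel": abs_rel, "rmse": rmse, "a1": a1}

# Tiny synthetic example (depths in metres), purely for illustration
gt = np.array([[10.0, 20.0], [30.0, 40.0]])
pred = np.array([[11.0, 19.0], [33.0, 36.0]])
print(depth_metrics(pred, gt))
```

Self-supervised methods predict depth only up to an unknown scale, so evaluations of those models usually apply median scaling (multiplying predictions by `median(gt) / median(pred)`) before computing these metrics.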