Projectile
Single shot
Metric (units)
Computer vision
Artificial intelligence
Computer science
Computer graphics (images)
One-shot
Optics
Mathematics
Physics
Materials science
Engineering
Mechanical engineering
Operations management
Metallurgy
Authors
Blanca Lasheras-Hernandez, Klaus H. Strobl, Sergio Izquierdo, Tim Bodenmüller, Rudolph Triebel, Javier Civera
Source
Journal: Cornell University - arXiv
Date: 2024-12-03
Identifier
DOI: 10.48550/arXiv.2412.02386
Abstract
Metric depth estimation from visual sensors is crucial for robots to perceive, navigate, and interact with their environment. Traditional range imaging setups, such as stereo or structured light cameras, face practical challenges including calibration, occlusions, and hardware demands, with accuracy limited by the baseline between cameras. Single- and multi-view monocular depth estimation offers a more compact alternative, but is constrained by the unobservability of the metric scale. Light field imaging provides a promising solution for estimating metric depth from a single device, thanks to its unique lens configuration. However, its application to single-view dense metric depth remains under-addressed, mainly due to the technology's high cost, the lack of public benchmarks, and proprietary geometrical models and software. Our work explores the potential of focused plenoptic cameras for dense metric depth. We propose a novel pipeline that predicts metric depth from a single plenoptic camera shot: it first generates a sparse metric point cloud using machine learning, which is then used to scale and align a dense relative depth map regressed by a foundation depth model, resulting in dense metric depth. To validate the pipeline, we curated the Light Field & Stereo Image Dataset (LFS) of real-world light field images with stereo depth labels, filling a gap in existing resources. Experimental results show that our pipeline produces accurate metric depth predictions, laying solid groundwork for future research in this field.
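The scale-and-align step the abstract describes can be illustrated with a minimal sketch. Many foundation depth models (e.g., MiDaS-style networks) regress affine-invariant relative depth, so a per-image scale and shift fitted against sparse metric anchors suffices to metricize the dense map. The NumPy sketch below fits that scale and shift by closed-form least squares; the function name, the synthetic data, and the assumption that the relative prediction is affine-related to metric depth in plain depth space are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def align_relative_to_metric(rel_depth, sparse_uv, sparse_metric):
    """Scale-and-shift alignment of a dense relative depth map to sparse
    metric anchors via least squares (hypothetical helper, not the
    authors' exact formulation).

    rel_depth     : (H, W) relative depth map from a foundation model
    sparse_uv     : (N, 2) integer pixel coordinates (row, col) of the
                    sparse metric point cloud projected into the image
    sparse_metric : (N,) metric depths at those pixels
    """
    # Relative predictions sampled at the sparse anchor pixels.
    d_rel = rel_depth[sparse_uv[:, 0], sparse_uv[:, 1]]

    # Solve min_{s,t} || s * d_rel + t - d_metric ||^2 in closed form.
    A = np.stack([d_rel, np.ones_like(d_rel)], axis=1)   # (N, 2) design matrix
    (s, t), *_ = np.linalg.lstsq(A, sparse_metric, rcond=None)

    # Apply the recovered affine transform to the whole dense map.
    return s * rel_depth + t

# Minimal usage example with synthetic, affine-distorted data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    metric_gt = rng.uniform(0.5, 5.0, size=(48, 64))     # fake metric depth
    rel = 0.25 * metric_gt - 0.1                         # "relative" prediction
    uv = rng.integers(0, (48, 64), size=(100, 2))        # 100 sparse anchors
    dense = align_relative_to_metric(rel, uv, metric_gt[uv[:, 0], uv[:, 1]])
    print(float(np.abs(dense - metric_gt).max()))        # ~0 for exact affine case
```

In practice this alignment is often performed in inverse-depth space, where affine-invariant models are typically trained, and with a robust estimator such as RANSAC to tolerate outliers in the sparse point cloud; the closed-form fit above is the simplest instance of the idea.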