Keywords
Monocular, Computer Science, Artificial Intelligence, Estimation, Computer Vision, Engineering, Systems Engineering
Authors
Ilia Indyk, Ilya Makarov
Identifier
DOI: 10.1109/ISMAR59233.2023.00138
Abstract
Depth estimation is crucial in various computer vision applications, including autonomous driving, robotics, and virtual and augmented reality. An accurate scene depth map is beneficial for localization, spatial registration, and tracking: it converts 2D images into precise 3D coordinates for accurate positioning, seamlessly aligns virtual and real objects in applications such as AR, and enhances object tracking by distinguishing distances. The self-supervised monocular approach is particularly promising, as it eliminates the need for complex and expensive data acquisition setups, relying solely on a standard RGB camera. Recently, transformer-based architectures have become popular for this problem, but even at high quality they suffer from high computational cost and poor perception of small details, as they focus more on global information. In this paper, we propose a novel fully convolutional network for monocular depth estimation, called MonoVAN, which incorporates a visual attention mechanism and applies super-resolution techniques in the decoder to better capture fine-grained details in depth maps. To the best of our knowledge, this work pioneers the use of convolutional visual attention in the context of depth estimation. Our experiments on the outdoor KITTI benchmark and the indoor NYUv2 dataset show that our approach outperforms the most advanced self-supervised methods, including state-of-the-art models such as the transformer-based VTDepth from ISMAR'22 and the hybrid convolutional-transformer MonoFormer from AAAI'23, while having a comparable or even smaller number of parameters than its competitors. We also validate the impact of each proposed improvement in isolation, providing evidence of its significant contribution. Code and weights are available at https://github.com/IlyaInd/MonoVAN.
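The self-supervised setup the abstract refers to is typically trained with a photometric reprojection loss between a view synthesised from the predicted depth and the real target frame. The abstract does not spell out the paper's loss, so the following is a minimal PyTorch sketch of the standard SSIM + L1 objective popularised by Monodepth2; the function name and the 0.85 weighting are illustrative assumptions, not MonoVAN's verified configuration.

```python
import torch
import torch.nn.functional as F

def photometric_loss(pred: torch.Tensor, target: torch.Tensor,
                     alpha: float = 0.85) -> torch.Tensor:
    """Per-pixel photometric error between a synthesised view and the
    real frame: a weighted mix of SSIM and L1 (Monodepth2-style sketch,
    not the paper's verified loss)."""
    l1 = (pred - target).abs().mean(1, keepdim=True)

    # Simplified SSIM computed with 3x3 average pooling.
    mu_p = F.avg_pool2d(pred, 3, 1, 1)
    mu_t = F.avg_pool2d(target, 3, 1, 1)
    sigma_p = F.avg_pool2d(pred ** 2, 3, 1, 1) - mu_p ** 2
    sigma_t = F.avg_pool2d(target ** 2, 3, 1, 1) - mu_t ** 2
    sigma_pt = F.avg_pool2d(pred * target, 3, 1, 1) - mu_p * mu_t
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_p * mu_t + c1) * (2 * sigma_pt + c2)) / (
        (mu_p ** 2 + mu_t ** 2 + c1) * (sigma_p + sigma_t + c2))
    ssim_loss = ((1 - ssim) / 2).clamp(0, 1).mean(1, keepdim=True)

    return alpha * ssim_loss + (1 - alpha) * l1
```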
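The "convolutional visual attention" named in the abstract is, as the model name suggests, in the spirit of the Visual Attention Network's large-kernel attention (LKA), which approximates a large receptive field with cheap depthwise convolutions and gates the input with the result. Below is a minimal PyTorch sketch assuming the standard VAN decomposition (5x5 depthwise, 7x7 depthwise with dilation 3, 1x1 pointwise); the class name and exact kernel sizes are assumptions, not necessarily the block MonoVAN uses.

```python
import torch
import torch.nn as nn

class LargeKernelAttention(nn.Module):
    """Decomposed large-kernel convolutional attention (VAN-style sketch):
    a 5x5 depthwise conv, a 7x7 depthwise conv with dilation 3, and a 1x1
    pointwise conv produce an attention map that gates the input."""
    def __init__(self, channels: int):
        super().__init__()
        self.dw_conv = nn.Conv2d(channels, channels, 5,
                                 padding=2, groups=channels)
        self.dw_dilated = nn.Conv2d(channels, channels, 7,
                                    padding=9, dilation=3, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.pointwise(self.dw_dilated(self.dw_conv(x)))
        # Element-wise gating: attention-like behaviour at convolutional cost,
        # which is why the abstract contrasts it with transformer self-attention.
        return x * attn
```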
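For the "super-resolution techniques in the decoder", one common choice is sub-pixel convolution (PixelShuffle, from ESPCN), which learns the upsampling instead of relying on bilinear interpolation and thus preserves fine detail. This is a hedged sketch assuming that choice; MonoVAN's actual decoder may differ, and SubPixelUpsample is an illustrative name.

```python
import torch
import torch.nn as nn

class SubPixelUpsample(nn.Module):
    """Sub-pixel convolution upsampling (ESPCN-style sketch): a conv expands
    channels by scale**2, then PixelShuffle rearranges them into a feature
    map that is `scale` times larger in each spatial dimension."""
    def __init__(self, in_ch: int, out_ch: int, scale: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch * scale ** 2, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.shuffle(self.conv(x))

# Usage: a (1, 64, 24, 80) encoder feature becomes (1, 32, 48, 160).
up = SubPixelUpsample(64, 32)
print(up(torch.randn(1, 64, 24, 80)).shape)  # torch.Size([1, 32, 48, 160])
```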