MonoVAN: Visual Attention for Self-Supervised Monocular Depth Estimation

Keywords: Monocular · Computer Science · Artificial Intelligence · Estimation · Computer Vision · Engineering · Systems Engineering
Authors
Ilia Indyk, Ilya Makarov
Identifier
DOI: 10.1109/ismar59233.2023.00138
Abstract

Depth estimation is crucial in various computer vision applications, including autonomous driving, robotics, and virtual and augmented reality. An accurate scene depth map benefits localization, spatial registration, and tracking: it converts 2D images into precise 3D coordinates for accurate positioning, seamlessly aligns virtual and real objects in applications such as AR, and enhances object tracking by distinguishing distances. The self-supervised monocular approach is particularly promising, as it eliminates the need for complex and expensive data-acquisition setups, relying solely on a standard RGB camera. Recently, transformer-based architectures have become popular for this problem, but at high quality they suffer from high computational cost and poor perception of small details, since they focus more on global information. In this paper, we propose a novel fully convolutional network for monocular depth estimation, called MonoVAN, which incorporates a visual attention mechanism and applies super-resolution techniques in the decoder to better capture fine-grained details in depth maps. To the best of our knowledge, this work pioneers the use of convolutional visual attention in the context of depth estimation. Our experiments on the outdoor KITTI benchmark and the indoor NYUv2 dataset show that our approach outperforms the most advanced self-supervised methods, including such state-of-the-art models as the transformer-based VTDepth from ISMAR'22 and the hybrid convolutional-transformer MonoFormer from AAAI'23, while having a comparable or even smaller number of parameters than its competitors. We also validate the impact of each proposed improvement in isolation, providing evidence of its significant contribution. Code and weights are available at https://github.com/IlyaInd/MonoVAN.