Self-supervised multi-frame depth estimation with visual-inertial pose transformer and monocular guidance

Monocular · Computer science · Artificial intelligence · Inertial frame of reference · Computer vision · Inertial measurement unit · Pose · Epipolar geometry · Feature (linguistics) · Image (mathematics) · Linguistics · Philosophy · Physics · Quantum mechanics
Authors
Xiang Wang,Haonan Luo,Zihang Wang,Jin Zheng,Xiao Bai
Source
Journal: Information Fusion [Elsevier]
Volume 108, Article 102363 · Citations: 1
Identifier
DOI:10.1016/j.inffus.2024.102363
Abstract

Self-supervised monocular depth estimation has been a popular topic since it does not require labor-intensive collection of depth ground truth. However, the accuracy of a monocular network is limited because it can only exploit the context of a single image, ignoring the geometric cues residing in videos. Most recently, multi-frame depth networks have been introduced into the self-supervised depth learning framework to improve monocular depth; they explicitly encode geometric information via pairwise cost volume construction. In this paper, we address two main issues that affect cost volume construction and thus multi-frame depth estimation. First, camera pose estimation, which determines the epipolar geometry in cost volume construction but has rarely been addressed, is enhanced with an additional inertial modality. The complementary visual and inertial modalities are fused adaptively to provide accurate camera poses with a novel visual-inertial fusion Transformer, in which self-attention handles visual-inertial feature interaction and cross-attention is used for task feature decoding and pose regression. Second, the monocular depth prior, which contains contextual information about the scene, is introduced into multi-frame cost volume aggregation at the feature level. A novel monocular-guided cost volume excitation module is proposed to adaptively modulate cost volume features and resolve possible matching ambiguity. With the proposed modules, a self-supervised multi-frame depth estimation network is presented, consisting of a monocular depth branch serving as the prior, a camera pose branch integrating both visual and inertial modalities, and a multi-frame depth branch producing the final depth with the aid of the former two branches. Experimental results on the KITTI dataset show that our proposed method achieves a notable performance boost in multi-frame depth estimation over state-of-the-art competitors, relatively improving depth accuracy by 9.2% over ManyDepth and 5.3% over MOVEDepth.
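
The sketch below illustrates, in PyTorch, the two mechanisms the abstract describes: self-attention over concatenated visual-inertial tokens followed by cross-attention from a pose query for pose regression, and a monocular-guided excitation that re-weights cost-volume features. It is a minimal sketch based only on the abstract; all module names, token shapes, channel sizes, and the specific gating scheme are assumptions for illustration and may differ from the paper's actual architecture.

```python
# Minimal sketch of the two components described in the abstract.
# All names, dimensions, and the fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn


class VisualInertialPoseTransformer(nn.Module):
    """Fuses visual and inertial tokens with self-attention, then decodes a
    6-DoF relative pose from a learnable query via cross-attention (assumed)."""

    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pose_query = nn.Parameter(torch.zeros(1, 1, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.pose_head = nn.Linear(dim, 6)  # axis-angle rotation + translation

    def forward(self, visual_tokens, inertial_tokens):
        # (B, Nv, C) and (B, Ni, C): concatenate modalities so self-attention
        # performs visual-inertial feature interaction.
        tokens = torch.cat([visual_tokens, inertial_tokens], dim=1)
        tokens = self.norm1(tokens + self.self_attn(tokens, tokens, tokens)[0])
        # A single pose query attends to the fused tokens (cross-attention)
        # and is regressed to a relative camera pose.
        query = self.pose_query.expand(tokens.size(0), -1, -1)
        query = self.norm2(query + self.cross_attn(query, tokens, tokens)[0])
        return self.pose_head(query.squeeze(1))  # (B, 6)


class MonocularGuidedExcitation(nn.Module):
    """Modulates cost-volume features with a gate predicted from monocular
    depth features; a lightweight stand-in for the excitation module."""

    def __init__(self, mono_channels=64, cost_channels=64):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(mono_channels, cost_channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, cost_features, mono_features):
        # Element-wise re-weighting suppresses ambiguous matches in the
        # cost volume using contextual cues from the monocular branch.
        return cost_features * self.gate(mono_features)


if __name__ == "__main__":
    pose_net = VisualInertialPoseTransformer()
    pose = pose_net(torch.randn(2, 100, 256), torch.randn(2, 10, 256))
    print(pose.shape)  # torch.Size([2, 6])

    excite = MonocularGuidedExcitation()
    out = excite(torch.randn(2, 64, 48, 160), torch.randn(2, 64, 48, 160))
    print(out.shape)  # torch.Size([2, 64, 48, 160])
```

In this reading, the pose predicted by the visual-inertial branch defines the epipolar geometry used to build the multi-frame cost volume, and the monocular branch's features then gate that cost volume before depth decoding; the exact wiring is an assumption consistent with the abstract.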