Unified Information Fusion Network for Multi-Modal RGB-D and RGB-T Salient Object Detection

Authors
Wei Gao, Guibiao Liao, Siwei Ma, Ge Li, Yongsheng Liang, Weisi Lin
Source
Journal: IEEE Transactions on Circuits and Systems for Video Technology [Institute of Electrical and Electronics Engineers]
Volume/Issue: 32 (4): 2091-2106 · Citations: 134
Identifier
DOI: 10.1109/tcsvt.2021.3082939
Abstract

The use of complementary information, namely depth or thermal information, has shown its benefits for salient object detection (SOD) in recent years. However, the RGB-D and RGB-T SOD problems are currently solved only independently, and most existing methods directly extract and fuse raw features from backbones. Such methods are easily restricted by low-quality modality data and redundant cross-modal features. In this work, a unified end-to-end framework is designed to simultaneously handle the RGB-D and RGB-T SOD tasks. Specifically, to effectively tackle multi-modal features, we propose a novel multi-stage and multi-scale fusion network (MMNet), which consists of a cross-modal multi-stage fusion module (CMFM) and a bi-directional multi-scale decoder (BMD). Similar to the visual color stage doctrine in the human visual system (HVS), the proposed CMFM aims to explore important feature representations in the feature response stage and integrate them into cross-modal features in the adversarial combination stage. Moreover, the proposed BMD learns the combination of multi-level cross-modal fused features to capture both local and global information of salient objects, and can further boost multi-modal SOD performance. The proposed unified cross-modality feature analysis framework, based on two-stage and multi-scale information fusion, can be used for diverse multi-modal SOD tasks. Comprehensive experiments ( $\sim 92\text{K}$ image pairs) demonstrate that the proposed method consistently outperforms 21 other state-of-the-art methods on nine benchmark datasets. This validates that our proposed method works well on diverse multi-modal SOD tasks with good generalization and robustness, and provides a good multi-modal SOD benchmark.
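The abstract's two-stage fusion idea (a per-channel feature response stage followed by a cross-modal combination stage) can be sketched roughly as below. This is a minimal illustrative sketch only, assuming channel-wise sigmoid gating and element-wise addition as stand-ins; the function names, the gating choice, and the feature shapes are assumptions for illustration, not the paper's actual CMFM implementation.

```python
import numpy as np

def channel_response(feat):
    # Stage 1 (feature response): pool spatial dimensions into one
    # score per channel, then gate each channel with a sigmoid so
    # that more responsive channels are emphasized.
    pooled = feat.mean(axis=(1, 2))                  # shape (C,)
    weights = 1.0 / (1.0 + np.exp(-pooled))          # sigmoid gate
    return feat * weights[:, None, None]

def two_stage_fusion(rgb_feat, aux_feat):
    # Stage 2 (combination): merge the gated features of the two
    # modalities. A simple element-wise sum stands in here for the
    # learned adversarial combination described in the paper.
    return channel_response(rgb_feat) + channel_response(aux_feat)

# Toy backbone features: C x H x W, aux is depth or thermal.
rgb = np.random.rand(64, 16, 16)
aux = np.random.rand(64, 16, 16)
fused = two_stage_fusion(rgb, aux)
print(fused.shape)  # (64, 16, 16)
```

In the full network, one such fused map would be produced at each backbone stage and the multi-level results passed to the bi-directional multi-scale decoder; the sketch covers only a single stage.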