Computer science
RGB color model
Robustness (evolution)
Encoder
Artificial intelligence
Feature (linguistics)
Computer vision
Object detection
Fuse (electrical)
Pattern recognition (psychology)
Engineering
Biochemistry
Chemistry
Linguistics
Philosophy
Electrical engineering
Gene
Operating system
Authors
Kechen Song,Han Wang,Ying Zhao,Liming Huang,Hongwen Dong,Yunhui Yan
Identifier
DOI:10.1016/j.jksuci.2023.101702
Abstract
In recent years, bimodal salient object detection has developed rapidly. Because of its robustness to extreme conditions such as background similarity and illumination variation, researchers have begun to focus on RGB-Depth-Thermal salient object detection (RGB-D-T SOD). However, most existing bimodal methods require expensive computation to achieve accurate prediction, and the situation is even worse for three-modal methods, which limits their applicability. To solve this problem, we are the first to propose a lightweight multi-level feature difference fusion network (MFDF) for real-time RGB-D-T SOD. Since the depth modality contains less useful information, we design an asymmetric three-stream encoder based on MobileNetV2. Because high-level and low-level features differ in semantics and detail, applying the same module to both indiscriminately leads to a large number of redundant parameters. Instead, in the encoding stage we introduce a cross-modal enhancement module (CME) and a cross-modal fusion module (CMF) to fuse low-level and high-level features, respectively. To reduce redundant parameters, we further design a low-level feature decoding module (LFD) and a multi-scale high-level feature fusion module (MHFF). Extensive experiments show that the proposed MFDF outperforms 17 state-of-the-art methods. In terms of efficiency, MFDF runs faster (124 FPS at an input size of 320 × 320) with far fewer parameters (8.9 M).
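To make the "asymmetric three-stream encoder" idea concrete, below is a minimal PyTorch sketch, not the authors' code: it gives RGB and thermal full MobileNetV2 backbones and the depth modality a truncated, lighter one, reflecting the abstract's point that depth carries less useful information. The truncation point, the 1 × 1 projection, the element-wise-sum fusion, and all names (`AsymmetricThreeStreamEncoder`, `mobilenet_features`, `depth_proj`) are illustrative assumptions; the paper's CME, CMF, LFD, and MHFF modules are not specified in the abstract and are not implemented here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import mobilenet_v2


def mobilenet_features(num_blocks=None):
    """Return the MobileNetV2 feature extractor, optionally truncated."""
    feats = mobilenet_v2(weights=None).features
    return feats if num_blocks is None else feats[:num_blocks]


class AsymmetricThreeStreamEncoder(nn.Module):
    """Full MobileNetV2 streams for RGB and thermal, a lighter one for depth (assumed design)."""

    def __init__(self):
        super().__init__()
        self.rgb_stream = mobilenet_features()      # 1280-channel output at stride 32
        self.thermal_stream = mobilenet_features()  # 1280-channel output at stride 32
        self.depth_stream = mobilenet_features(7)   # truncated: 32-channel output at stride 8
        # Placeholder projection so depth features can be summed with the others;
        # it stands in for the paper's (unspecified) cross-modal fusion modules.
        self.depth_proj = nn.Conv2d(32, 1280, kernel_size=1)

    def forward(self, rgb, depth, thermal):
        f_rgb = self.rgb_stream(rgb)
        f_thm = self.thermal_stream(thermal)
        f_dep = self.depth_proj(self.depth_stream(depth))
        # Match spatial resolution before the placeholder element-wise fusion.
        f_dep = F.adaptive_avg_pool2d(f_dep, f_rgb.shape[-2:])
        return f_rgb + f_thm + f_dep


if __name__ == "__main__":
    enc = AsymmetricThreeStreamEncoder()
    rgb = torch.randn(1, 3, 320, 320)       # 320 x 320 inputs, as reported in the paper
    thermal = torch.randn(1, 3, 320, 320)
    depth = torch.randn(1, 1, 320, 320).repeat(1, 3, 1, 1)  # replicate 1-channel depth to 3
    print(enc(rgb, depth, thermal).shape)   # torch.Size([1, 1280, 10, 10])
```

The design choice illustrated is that the cheaper depth branch keeps the overall parameter count and latency down, which is consistent with the reported 8.9 M parameters and 124 FPS, though the actual savings in MFDF come from its dedicated CME/CMF/LFD/MHFF modules rather than this simplified sum.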