Keywords
RGB color model
Artificial intelligence
Computer science
Fusion
Convolutional neural network
Computer vision
Fusion mechanism
Object detection
Salient object detection
Pattern recognition
Sensor fusion
Feature extraction
Authors
Qinling Guo, Wujie Zhou, Jingsheng Lei, Lu Yu
Identifier
DOI:10.1109/lsp.2021.3102524
Abstract
Salient object detection (SOD) based on convolutional neural networks has achieved remarkable success. However, further improving the detection performance on challenging scenes (e.g., low-light scenes) requires additional investigation. Thermal infrared imaging captures thermal radiation from the surface of objects. Thus, it is insensitive to lighting conditions and can provide uniform imaging of objects. Accordingly, we propose a two-stage fusion network (TSFNet) integrating RGB and thermal information for RGB-T SOD. For the first fusion stage, we propose a feature-wise fusion module that captures and aggregates united information and intersecting information in each local region of the RGB and thermal images, and then independent decoding is applied to the RGB and thermal features. For the second fusion stage, we propose a bilateral auxiliary fusion module that extracts auxiliary spatial features from the foreground and background of the thermal and RGB modalities. Finally, we use multiple supervision to further improve the SOD performance. Comprehensive experiments demonstrate that TSFNet outperforms 11 state-of-the-art models under various indicators on three RGB-T SOD datasets.
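The abstract describes a first-stage module that captures "united" and "intersecting" information from the RGB and thermal features in each local region. The paper's actual module design is not given here, so the following NumPy sketch is only a plausible illustration of that idea: it approximates united information with an element-wise maximum and intersecting information with an element-wise product, then aggregates the two. The function name and the aggregation choice are assumptions, not TSFNet's implementation.

```python
import numpy as np

def feature_wise_fusion(rgb_feat, thermal_feat):
    """Illustrative RGB-thermal feature fusion (assumed design, not TSFNet's).

    'United' information is approximated by an element-wise maximum and
    'intersecting' information by an element-wise product; the two are
    summed to form the fused feature map.
    """
    united = np.maximum(rgb_feat, thermal_feat)      # what either modality activates
    intersecting = rgb_feat * thermal_feat           # what both modalities agree on
    return united + intersecting

# Toy feature maps in (batch, channels, height, width) layout.
rng = np.random.default_rng(0)
rgb = rng.random((1, 8, 4, 4))
thermal = rng.random((1, 8, 4, 4))
fused = feature_wise_fusion(rgb, thermal)
print(fused.shape)  # (1, 8, 4, 4)
```

After this first fusion stage, the abstract states that the RGB and thermal streams are still decoded independently; the fused map would serve as a shared cue alongside each stream rather than replacing them.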