Computer science
Modality (human–computer interaction)
Artificial intelligence
Graph
Merge (version control)
Pattern recognition (psychology)
Segmentation
Feature learning
Computer vision
Theoretical computer science
Information retrieval
Authors
Jiawei Li, J. Z. Chen, Jinyuan Liu, Huimin Ma
Identifier
DOI:10.1145/3581783.3612135
Abstract
Infrared and visible image fusion has gradually proved to be a vital branch of multi-modality imaging technologies. In recent developments, researchers focus not only on the quality of fused images but also on their performance in downstream tasks. Nevertheless, most methods pay little attention to mutual learning between different modalities, so the fused images lack significant details and textures. To overcome this issue, we propose an interactive graph neural network (GNN)-based cross-modality fusion architecture, called IGNet. Specifically, we first apply a multi-scale extractor to obtain shallow features, which serve as the input for building graph structures. The graph interaction module then constructs graph structures from the intermediate features of the infrared and visible branches. Meanwhile, the graph structures of the two branches interact for cross-modality and semantic learning, so that the fused images retain important feature representations and improve performance on downstream tasks. In addition, the proposed leader nodes improve information propagation within each modality. Finally, we merge all graph features to obtain the fusion result. Extensive experiments on different datasets (i.e., TNO, MFNet, and M3FD) demonstrate that our IGNet can generate visually appealing fused images while scoring on average 2.59% higher mAP@.5 in detection and 7.77% higher mIoU in segmentation than the compared state-of-the-art methods. The source code of the proposed IGNet is available at https://github.com/lok-18/IGNet.
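The abstract outlines a pipeline of shallow multi-scale extraction, cross-modality graph interaction, and feature merging. The following PyTorch sketch is a conceptual illustration of that high-level flow only, not the authors' IGNet implementation (see the linked repository for that): the module names (ShallowExtractor, GraphInteraction, IGNetSketch), the patch-based node construction, attention-based message passing (standing in for the paper's graph structures and leader nodes), and all hyperparameters are assumptions made for the example.

```python
# Conceptual sketch of cross-modality graph-style interaction for IR/visible
# fusion. NOT the authors' IGNet code; module names, patch-based node
# construction, and attention message passing are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ShallowExtractor(nn.Module):
    """Shallow multi-scale convolutional extractor (assumed structure)."""
    def __init__(self, channels=32):
        super().__init__()
        self.conv3 = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.conv5 = nn.Conv2d(1, channels, kernel_size=5, padding=2)

    def forward(self, x):
        # Concatenate features from two receptive fields as a crude multi-scale cue.
        return torch.cat([F.relu(self.conv3(x)), F.relu(self.conv5(x))], dim=1)


class GraphInteraction(nn.Module):
    """Treats each spatial patch as a graph node and lets the infrared and
    visible branches exchange messages via cross-attention (a stand-in for
    the paper's graph structures and leader nodes)."""
    def __init__(self, channels, patch=8):
        super().__init__()
        self.patch = patch
        self.attn = nn.MultiheadAttention(channels, num_heads=4, batch_first=True)

    def to_nodes(self, feat):
        # Average-pool each patch into a node embedding: (B, N, C).
        pooled = F.avg_pool2d(feat, self.patch)
        b, c, h, w = pooled.shape
        return pooled.flatten(2).transpose(1, 2), (h, w)

    def forward(self, feat_ir, feat_vis):
        nodes_ir, hw = self.to_nodes(feat_ir)
        nodes_vis, _ = self.to_nodes(feat_vis)
        # Cross-modality message passing: each branch attends to the other.
        upd_ir, _ = self.attn(nodes_ir, nodes_vis, nodes_vis)
        upd_vis, _ = self.attn(nodes_vis, nodes_ir, nodes_ir)

        def fold(nodes):
            # Fold updated nodes back to a coarse map and upsample to feature size.
            b, n, c = nodes.shape
            grid = nodes.transpose(1, 2).reshape(b, c, *hw)
            return F.interpolate(grid, size=feat_ir.shape[-2:],
                                 mode="bilinear", align_corners=False)

        return feat_ir + fold(upd_ir), feat_vis + fold(upd_vis)


class IGNetSketch(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.extract = ShallowExtractor(channels // 2)
        self.interact = GraphInteraction(channels)
        self.fuse = nn.Conv2d(2 * channels, 1, kernel_size=3, padding=1)

    def forward(self, ir, vis):
        f_ir, f_vis = self.extract(ir), self.extract(vis)
        f_ir, f_vis = self.interact(f_ir, f_vis)
        # Merge interacted features from both branches into one fused image.
        return torch.sigmoid(self.fuse(torch.cat([f_ir, f_vis], dim=1)))


if __name__ == "__main__":
    ir = torch.rand(1, 1, 256, 256)   # single-channel infrared image
    vis = torch.rand(1, 1, 256, 256)  # visible image (grayscale for simplicity)
    print(IGNetSketch()(ir, vis).shape)  # -> torch.Size([1, 1, 256, 256])
```

A real implementation would follow the paper's explicit graph construction with leader nodes rather than plain cross-attention; the sketch only conveys how shallow features, cross-modality interaction, and a final merge fit together.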