Artificial intelligence
Computer science
Residual
Pattern recognition (psychology)
Deep learning
Convolutional neural network
Image fusion
Image registration
Encoder
Computer vision
Image (mathematics)
Algorithm
Operating system
Authors
Xinyu Xie, Xiaozhi Zhang, Shengcheng Ye, Dongping Xiong, Lijun Ouyang, Bin Yang, Hong Zhou, Yaping Wan
Source
Journal: IEEE Transactions on Instrumentation and Measurement
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume/Pages: 72: 1-17
Citations: 3
Identifier
DOI: 10.1109/tim.2023.3317470
Abstract
It is crucial to integrate the complementary information of multimodal medical images to enhance image quality for clinical diagnosis. Deep learning methods based on convolutional neural networks (CNNs) have been widely applied to image fusion owing to their strong modeling ability. However, CNNs fail to build long-range dependencies within an image, which limits fusion performance. To address this issue, we develop a new unsupervised multimodal medical image fusion framework that combines the Swin Transformer and CNNs. The proposed model follows a two-stage training strategy: an auto-encoder is first trained to extract multiple deep features and reconstruct fused images, and a novel residual Swin-Convolution fusion (RSCF) module is then designed to fuse the multiscale features. Specifically, the RSCF module consists of a global residual Swin Transformer branch that captures global contextual information and a local gradient residual dense branch that captures local fine-grained information. To integrate more meaningful information and ensure the visual quality of the fused images, we define a joint loss function comprising a content loss and an intensity loss to constrain the RSCF module. Moreover, we introduce an adaptive weight block that assigns learnable weights in the loss function, controlling the degree to which information from the source images is preserved. In this way, abundant texture features from MRI images and appropriate intensity information from functional images can be preserved simultaneously. Extensive comparisons between the proposed model and other state-of-the-art fusion methods were conducted on CT-MRI, PET-MRI, and SPECT-MRI image fusion tasks; both qualitative and quantitative results demonstrate the superiority of our model.
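The abstract does not give the exact form of the joint loss, so the following is only a minimal NumPy sketch of the idea: a content term that matches the fused image's gradients to the MRI's texture, plus an intensity term that matches the fused image to the functional image, with scalar weights `w_content` and `w_intensity` standing in for the paper's learnable adaptive weight block. The gradient operator and L1 penalties are assumptions for illustration, not the authors' definitions.

```python
import numpy as np

def gradients(img):
    # Simple forward-difference gradients (a stand-in for the
    # gradient operator used in the paper's content loss).
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return gx, gy

def joint_loss(fused, mri, func, w_content=1.0, w_intensity=1.0):
    """Sketch of a content + intensity joint loss.

    content:   L1 distance between gradients of the fused image and
               the MRI image (preserves structural texture).
    intensity: L1 distance between the fused image and the functional
               image (preserves PET/SPECT/CT intensity information).
    The weights play the role of the adaptive weight block, which in
    the paper is learned rather than fixed.
    """
    fgx, fgy = gradients(fused)
    mgx, mgy = gradients(mri)
    content = np.mean(np.abs(fgx - mgx) + np.abs(fgy - mgy))
    intensity = np.mean(np.abs(fused - func))
    return w_content * content + w_intensity * intensity
```

With fixed weights, increasing `w_intensity` relative to `w_content` biases the fused image toward the functional image's intensities; the adaptive weight block in the paper makes this trade-off learnable per image rather than hand-tuned.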