Keywords
Convolutional neural network
Modality
Computer science
Artificial intelligence
Image fusion
Medical imaging
Segmentation
Deep learning
Image segmentation
Pattern recognition
Computer vision
Source code
Feature extraction
Authors
Yu Liu, Yu Shi, Fuhao Mu, Juan Cheng, Chang Li, Xun Chen
Identifier
DOI: 10.1109/TIM.2022.3184360
Abstract
Medical image fusion aims to integrate the complementary information captured by images of different modalities into a more informative composite image. However, current studies on medical image fusion suffer from several drawbacks: 1) Existing methods are mostly designed for 2-D slice fusion, and they tend to lose spatial contextual information when fusing medical images with volumetric structure slice by slice. 2) The few existing 3-D medical image fusion methods fail to consider the characteristics of the source modalities sufficiently, leading to the loss of important modality information. 3) Most existing works concentrate on pursuing good performance in visual perception and objective evaluation, while there is a severe lack of clinical problem-oriented studies. In this paper, to address these issues, we propose a multimodal MRI volumetric data fusion method based on an end-to-end convolutional neural network (CNN). In our network, an attention-based multimodal feature fusion (MMFF) module is presented for more effective feature learning. In addition, a specific loss function that considers the characteristics of different MRI modalities is designed to preserve the modality information. Experimental results demonstrate that the proposed method obtains more competitive results in both visual quality and objective assessment when compared with some representative 3-D and 2-D medical image fusion methods. We further verify the significance of the proposed method for brain tumor segmentation by enriching the input modalities, and the results show that it helps improve the segmentation accuracy. The source code of our fusion method is available at https://github.com/yuliu316316/3D-CNN-Fusion.