Keywords
Image fusion
Artificial intelligence
Computer science
Feature
Positron emission tomography (PET)
Medical imaging
Medical diagnosis
Computer vision
Single-photon emission computed tomography (SPECT)
Pattern recognition
Magnetic resonance imaging (MRI)
Image quality
Feature extraction
Image
Nuclear medicine
Radiology
Medicine
Authors
Jinyu Wen, Feiwei Qin, Jiao Du, Meie Fang, Xinhua Wei, C. L. Philip Chen, Ping Li
Identifier
DOI:10.1109/tmm.2023.3273924
Abstract
Multimodal image fusion plays an essential role in medical image analysis and application, where computed tomography (CT), magnetic resonance (MR) imaging, single-photon emission computed tomography (SPECT), and positron emission tomography (PET) are commonly used modalities, especially for diagnosing brain diseases. Most existing fusion methods do not consider the characteristics of medical images; instead, they adopt strategies and assessment standards similar to those for natural image fusion. As a result, the distinctive medical semantic information (MS-Info) hidden in the different modalities is overlooked, and the ultimate clinical assessment of the fusion results is ignored. Our MsgFusion first builds a relationship between the key MS-Info of MR/CT/PET/SPECT images and image features, which guides a two-branch CNN feature extraction and the design of the image fusion framework. For MR images, one branch combines the spatial-domain feature with the frequency-domain feature (SF). For PET/SPECT/CT images, the other branch integrates the gray color-space feature with an adapted HSV color-space feature (GV). A classification-based hierarchical fusion strategy is also proposed to reconstruct the fused images so that the salient MS-Info reflecting anatomical structure and functional metabolism is preserved and enhanced. Fusion experiments are carried out on many pairs of MR-PET/SPECT and MR-CT images. According to seven classical objective quality assessments and one new subjective clinical quality assessment from 30 clinical doctors, the fusion results of the proposed MsgFusion are superior to those of representative existing methods.
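The two-branch input preparation described in the abstract can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the SF branch is approximated here by pairing the raw MR image with the log-magnitude of its 2-D DFT, and the GV branch by pairing a luminance-weighted grayscale with the HSV value channel (the per-pixel channel maximum) of a pseudo-color PET/SPECT image. The function names are hypothetical.

```python
import numpy as np

def mr_sf_features(mr):
    """Sketch of the SF branch input for an MR image.
    Spatial component: the image itself.
    Frequency component: log-magnitude of the centered 2-D DFT
    (an assumed stand-in for the paper's frequency-domain feature)."""
    spatial = mr.astype(np.float64)
    freq = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(spatial))))
    return np.stack([spatial, freq], axis=0)  # shape: 2 x H x W

def pet_gv_features(rgb):
    """Sketch of the GV branch input for a pseudo-color PET/SPECT/CT image.
    Gray component: Rec. 601 luminance of the RGB image.
    Value component: the V channel of HSV, i.e. the per-pixel max over RGB."""
    rgb = rgb.astype(np.float64) / 255.0
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    value = rgb.max(axis=-1)  # V = max(R, G, B)
    return np.stack([gray, value], axis=0)  # shape: 2 x H x W
```

In the paper, both feature pairs feed modality-specific CNN branches; the sketch above only shows how such paired inputs could be assembled.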