Modality (human-computer interaction)
Artificial intelligence
Consistency (knowledge base)
Segmentation
Computer science
Pattern recognition (psychology)
Computer vision
Authors
Z. Li, Chen Huang, Shipeng Xie
Source
Journal: IEEE Transactions on Instrumentation and Measurement
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/Pages: 73: 1-11
Identifiers
DOI: 10.1109/tim.2024.3400343
Abstract
Multimodal magnetic resonance imaging (MRI) provides a wealth of complementary information for determining the anatomical structure and pathological features of tumors. However, owing to limitations imposed by disease progression and imaging cost, a complete set of MRI modalities often cannot be acquired in clinical practice. To bring segmentation performance on non-dominant modalities up to the level achieved on dominant modalities, motivated by the potential value of the intrinsic connections between different modalities and by the difficulty of pixel-level annotation of medical images, we propose a win-win approach called the multimodality-assisted semi-supervised segmentation network (M²S³-Net). The core of the proposed approach is a multi-non-dominant-modality-assisted semi-supervised training strategy, which extracts generic and robust modality features from a limited set of annotated images by learning implicit features shared between two non-dominant modalities, thereby achieving information complementarity. To support this framework, we further propose a modality fusion module (MFM) and a cross-modality-assisted skip connection (CMA skip connection), which adaptively aggregate modality-independent features in a learnable manner to enhance the representational power of deep models. Experiments on the public BraTS2019 dataset show that, for the segmentation of peritumoral edema using two non-dominant modalities, the proposed method achieves Dice coefficients of up to 77.82% (10% labeled) and 78.33% (20% labeled), an improvement of more than 10% over semi-supervised learning (SSL) segmentation networks that use only a single non-dominant modality, and of 6.14% and 6.35%, respectively, over those that use a single dominant modality. Compared with other multimodal segmentation methods, our method achieves 83.55% on the tumor core using only 20% of the labels, which is superior to previous methods.
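The abstract describes a dual-path design in which features from two non-dominant modalities are adaptively fused (MFM) and passed to the decoder through cross-modality-assisted skip connections. No code accompanies this listing, so the following is only a minimal, illustrative PyTorch sketch of one way a dual-encoder network with a learnable gated fusion and fused skip connections could be wired; all class names (DualModalityUNet, GatedFusion), channel sizes, and the sigmoid gating scheme are assumptions, not the authors' MFM or CMA skip-connection implementation.

```python
# Illustrative sketch only (NOT the authors' released code): a two-encoder
# U-Net-style network where each non-dominant modality has its own encoder,
# a learnable channel gate fuses the two feature maps at every level, and the
# fused features serve as skip connections into one shared decoder.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class GatedFusion(nn.Module):
    """Learnable fusion of two same-shaped feature maps via a channel gate."""
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(2 * ch, ch, 1), nn.Sigmoid())

    def forward(self, a, b):
        g = self.gate(torch.cat([a, b], dim=1))   # per-channel weights in [0, 1]
        return g * a + (1 - g) * b                # convex combination of the two modalities


class DualModalityUNet(nn.Module):
    """Two single-modality encoders; fused features feed one shared decoder."""
    def __init__(self, n_classes=2, ch=(16, 32, 64)):
        super().__init__()
        self.enc_a = nn.ModuleList([conv_block(1, ch[0]), conv_block(ch[0], ch[1]), conv_block(ch[1], ch[2])])
        self.enc_b = nn.ModuleList([conv_block(1, ch[0]), conv_block(ch[0], ch[1]), conv_block(ch[1], ch[2])])
        self.fuse = nn.ModuleList([GatedFusion(c) for c in ch])
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ModuleList([
            nn.ConvTranspose2d(ch[2], ch[1], 2, stride=2),
            nn.ConvTranspose2d(ch[1], ch[0], 2, stride=2),
        ])
        self.dec = nn.ModuleList([conv_block(2 * ch[1], ch[1]), conv_block(2 * ch[0], ch[0])])
        self.head = nn.Conv2d(ch[0], n_classes, 1)

    def forward(self, xa, xb):
        skips = []
        for i, (ea, eb, fu) in enumerate(zip(self.enc_a, self.enc_b, self.fuse)):
            xa, xb = ea(xa), eb(xb)
            skips.append(fu(xa, xb))              # fused cross-modality skip feature
            if i < len(self.enc_a) - 1:
                xa, xb = self.pool(xa), self.pool(xb)
        x = skips[-1]                             # deepest fused feature starts the decoder
        for up, dec, skip in zip(self.up, self.dec, reversed(skips[:-1])):
            x = dec(torch.cat([up(x), skip], dim=1))
        return self.head(x)


# Usage example: two non-dominant MRI modalities as single-channel 2-D inputs.
if __name__ == "__main__":
    net = DualModalityUNet(n_classes=2)
    t1, t2 = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
    print(net(t1, t2).shape)  # torch.Size([1, 2, 64, 64])
```

The gate here simply learns a per-channel convex combination of the two modality features; the actual MFM and CMA skip connection in the paper may aggregate modality-independent features differently, and the semi-supervised training strategy (how unlabeled images are exploited) is not sketched at all.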