Keywords
Computer science
Modality (human–computer interaction)
Artificial intelligence
Image (mathematics)
Segmentation
Training
Image segmentation
Pattern recognition
Computer vision
Authors
Liangce Qi, Weili Shi, Yu Miao, Yonghui Li, Guanyuan Feng, Zhengang Jiang
Identifier
DOI: 10.1016/j.bspc.2024.106343
Abstract
Despite the great success of deep neural networks in brain tumor segmentation, obtaining sufficient annotated images is challenging because annotation requires clinical expertise. Masked image modeling has recently achieved performance competitive with supervised training by learning rich representations from unlabeled data. However, it was originally designed for vision transformers, and its effectiveness has not been well studied in the medical domain, which typically involves limited unlabeled data and small convolutional networks. In this paper, we propose a self-supervised learning framework to pre-train U-Net for brain tumor segmentation. Our goal is to learn modality-specific and modality-invariant representations from multi-modality magnetic resonance images, motivated by the fact that different modalities depict the same organs and tissues yet have varied appearances. To achieve this, we design a new pretext task that reconstructs the masked patches of each modality from partial observations of the other modalities. We evaluate our method by transfer performance on the BraTS 2020 dataset. The experimental results demonstrate that our method outperforms other self-supervised learning methods and improves on a strong fully supervised baseline. The source code is available at https://github.com/mobiletomb/IS-MIM.
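The pretext task described above can be pictured with a minimal PyTorch sketch. Everything below is an illustrative assumption rather than the authors' IS-MIM implementation: 2D slices stand in for 3D volumes, the 75% mask ratio and 16-pixel patches are arbitrary, and `per_modality_masks`, `TinyUNet`, and `pretrain_step` are hypothetical names; see the linked repository for the real code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def per_modality_masks(n_modalities, n_patches, mask_ratio=0.75, device="cpu"):
    # True = patch is hidden for that modality. Masking each modality
    # independently means a hidden patch usually stays visible in at least
    # one other modality, so it can be reconstructed cross-modally.
    return torch.rand(n_modalities, n_patches, device=device) < mask_ratio

class TinyUNet(nn.Module):
    # Shallow 2D stand-in for the 3D U-Net used in the paper.
    def __init__(self, in_ch, out_ch, width=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU())
        self.down = nn.Conv2d(width, width * 2, 3, stride=2, padding=1)
        self.up = nn.ConvTranspose2d(width * 2, width, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(width * 2, width, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(width, out_ch, 1)

    def forward(self, x):
        e1 = self.enc(x)
        e2 = F.relu(self.down(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d)

def pretrain_step(model, images, patch=16, mask_ratio=0.75):
    # images: (batch, modalities, H, W); each modality is one channel.
    b, m, h, w = images.shape
    ph, pw = h // patch, w // patch
    mask = per_modality_masks(m, ph * pw, mask_ratio, images.device)
    mask = mask.view(1, m, ph, pw).float()
    # Upsample the patch-level mask to pixel resolution.
    mask = F.interpolate(mask, size=(h, w), mode="nearest").expand(b, -1, -1, -1)
    masked_input = images * (1.0 - mask)   # zero out hidden patches
    recon = model(masked_input)
    # Score reconstruction only where patches were hidden.
    err = F.mse_loss(recon, images, reduction="none")
    return (err * mask).sum() / mask.sum().clamp(min=1.0)
```

A toy usage, with random tensors standing in for co-registered MRI slices:

```python
model = TinyUNet(in_ch=4, out_ch=4)   # e.g. T1, T1ce, T2, FLAIR
mri = torch.randn(2, 4, 128, 128)     # random stand-in for MRI data
loss = pretrain_step(model, mri)
loss.backward()
```

Computing the loss only on masked regions is the standard masked-image-modeling choice; here it additionally forces the network to infer a modality's hidden content from the other modalities' partial observations, which is what encourages modality-invariant features alongside modality-specific ones.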