Computer science
Artificial intelligence
Segmentation
Modality (human-computer interaction)
Leverage (statistics)
Annotation
Pattern
Pattern recognition (psychology)
Deep learning
Image segmentation
Machine learning
Social science
Sociology
Authors
Malo Alefsen,Eugene Vorontsov,Samuel Kadoury
Identifier
DOI:10.1007/978-3-031-43901-8_14
Abstract
Automated medical image segmentation using deep neural networks typically requires substantial supervised training. However, these models fail to generalize well across different imaging modalities. This shortcoming, amplified by the limited availability of expert-annotated data, has been hampering the deployment of such methods at a larger scale across modalities. To address these issues, we propose M-GenSeg, a new semi-supervised generative training strategy for cross-modality tumor segmentation on unpaired bi-modal datasets. With the addition of known healthy images, an unsupervised objective encourages the model to disentangle tumors from the background, which parallels the segmentation task. Then, by teaching the model to convert images across modalities, we leverage available pixel-level annotations from the source modality to enable segmentation in the unannotated target modality. We evaluated the performance on a brain tumor segmentation dataset composed of four different contrast sequences from the public BraTS 2020 challenge data. We report consistent improvement in Dice scores over state-of-the-art domain-adaptive baselines on the unannotated target modality. Unlike the prior art, M-GenSeg also introduces the ability to train with a partially annotated source modality.
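The abstract suggests a training objective with three ingredients: a supervised segmentation term on the annotated source modality, an unsupervised healthy-image term that encourages disentangling tumors from background, and a cross-modality translation consistency term. The sketch below is a hypothetical, simplified illustration of how such terms might be combined; all function names, loss choices (soft Dice, L1), and weights are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted and a reference tumor mask."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def l1(a, b):
    """Mean absolute error, used here for reconstruction/translation terms."""
    return float(np.mean(np.abs(a - b)))

def total_loss(seg_pred, seg_gt,          # supervised pair (source modality)
               recon_healthy, healthy,    # unsupervised healthy reconstruction
               cycled, original,          # cross-modality cycle consistency
               w_seg=1.0, w_disentangle=1.0, w_cycle=1.0):
    """Illustrative combination of the three objectives described in the
    abstract; the weights are arbitrary placeholders."""
    return (w_seg * dice_loss(seg_pred, seg_gt)
            + w_disentangle * l1(recon_healthy, healthy)
            + w_cycle * l1(cycled, original))
```

With perfect predictions, reconstructions, and cycle translations, every term vanishes and the total loss is zero, which is the intended optimum of such a combined objective.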