Segmentation
Artificial intelligence
Computer science
Modality (human–computer interaction)
Modal verb
Consistency (knowledge base)
Pattern recognition (psychology)
Similarity (geometry)
Feature (linguistics)
Exploitation
Machine learning
Computer vision
Image (mathematics)
Chemistry
Philosophy
Computer security
Polymer chemistry
Linguistics
Authors
Xiaoyu Chen, Hong-Yu Zhou, Feng Liu, Jiansen Guo, Liansheng Wang, Yizhou Yu
Identifier
DOI: 10.1016/j.media.2022.102506
Abstract
Training deep segmentation models for medical images often requires a large amount of labeled data. To tackle this issue, semi-supervised segmentation has been employed to produce satisfactory delineation results at an affordable labeling cost. However, traditional semi-supervised segmentation methods fail to exploit unpaired multi-modal data, which are widely available in today's clinical routine. In this paper, we address this point by proposing Modality-collAborative Semi-Supervised segmentation (i.e., MASS), which utilizes modality-independent knowledge learned from unpaired CT and MRI scans. To exploit such knowledge, MASS uses cross-modal consistency to regularize deep segmentation models in both the semantic and anatomical spaces, from which it learns intra- and inter-modal correspondences to warp atlas labels into predictions. To better capture inter-modal correspondence from a feature-alignment perspective, we propose a contrastive similarity loss that regularizes the latent spaces of both modalities, yielding generalized and robust modality-independent representations. Compared to semi-supervised and multi-modal segmentation counterparts, the proposed MASS brings nearly 6% improvement under extremely limited supervision.
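To make the feature-alignment idea concrete, below is a minimal sketch of an InfoNCE-style contrastive similarity loss between CT and MRI feature embeddings. This is not the paper's exact formulation: the pseudo-pairing of the i-th CT and MRI features, the temperature value, and all function names are illustrative assumptions.

```python
# Hypothetical sketch of a contrastive similarity loss for aligning CT and MRI
# embeddings. NOT the authors' exact loss; an InfoNCE-style illustration of the
# cross-modal feature alignment described in the abstract.
import torch
import torch.nn.functional as F


def contrastive_similarity_loss(ct_feats, mri_feats, temperature=0.1):
    """Pull (pseudo-)corresponding CT/MRI embeddings together, push apart the rest.

    ct_feats, mri_feats: (N, D) feature tensors. The i-th rows are treated as a
    positive pair (e.g., patches mapped to the same atlas location), which is an
    assumption made here purely for illustration.
    """
    ct = F.normalize(ct_feats, dim=1)
    mri = F.normalize(mri_feats, dim=1)
    logits = ct @ mri.t() / temperature            # (N, N) scaled cosine similarities
    targets = torch.arange(ct.size(0), device=ct.device)
    # Symmetric cross-entropy over CT->MRI and MRI->CT directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


# Toy usage with random features standing in for encoder outputs.
ct_z = torch.randn(8, 128)
mri_z = torch.randn(8, 128)
loss = contrastive_similarity_loss(ct_z, mri_z)
```

In practice such a term would be added to the segmentation and consistency objectives with a weighting coefficient; the weighting and pairing strategy would follow the paper rather than this sketch.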