Computer science
Segmentation
Artificial intelligence
Supervised learning
Feature learning
Machine learning
Ground truth
Pattern recognition
Representation learning
Semi-supervised learning
Feature (machine learning)
Artificial neural network
Authors
Qianying Liu, Xiao Gu, Paul Henderson, Fani Deligianni
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Citations: 2
Identifiers
DOI:10.48550/arxiv.2306.14293
Abstract
Semi-supervised learning has demonstrated great potential in medical image segmentation by utilizing knowledge from unlabeled data. However, most existing approaches do not explicitly capture high-level semantic relations between distant regions, which limits their performance. In this paper, we focus on representation learning for semi-supervised learning, by developing a novel Multi-Scale Cross Supervised Contrastive Learning (MCSC) framework, to segment structures in medical images. We jointly train CNN and Transformer models, regularising their features to be semantically consistent across different scales. Our approach contrasts multi-scale features based on ground-truth and cross-predicted labels, in order to extract robust feature representations that reflect intra- and inter-slice relationships across the whole dataset. To tackle class imbalance, we take into account the prevalence of each class to guide contrastive learning and ensure that features adequately capture infrequent classes. Extensive experiments on two multi-structure medical segmentation datasets demonstrate the effectiveness of MCSC. It not only outperforms state-of-the-art semi-supervised methods by more than 3.0% in Dice, but also greatly reduces the performance gap with fully supervised methods.
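The abstract mentions using class prevalence to guide contrastive learning so that infrequent structures are adequately represented. The sketch below is a minimal, hypothetical PyTorch illustration of that single idea, not the authors' implementation: a supervised contrastive loss over pooled feature vectors in which each anchor is re-weighted by the inverse frequency of its class. The function name `class_weighted_contrastive_loss`, the temperature value, and the inverse-frequency weighting scheme are assumptions made for illustration.

```python
# Minimal sketch (assumed, not the paper's released code) of a class-prevalence-weighted
# supervised contrastive loss over pooled feature vectors.
import torch
import torch.nn.functional as F


def class_weighted_contrastive_loss(features, labels, temperature=0.1):
    """features: (N, D) pooled feature vectors; labels: (N,) integer class ids."""
    features = F.normalize(features, dim=1)          # compare in cosine-similarity space
    sim = features @ features.t() / temperature      # (N, N) similarity logits

    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # log-softmax over all other samples (self-similarity excluded)
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # mean log-probability of positives, for anchors that have at least one positive
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    sum_log_prob_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    mean_log_prob_pos = sum_log_prob_pos[valid] / pos_counts[valid]

    # inverse-prevalence weight per anchor: rarer classes get larger weight (assumption)
    class_counts = torch.bincount(labels)
    weights = (labels.numel() / class_counts.clamp(min=1).float())[labels][valid]
    weights = weights / weights.sum()

    return -(weights * mean_log_prob_pos).sum()


# Example usage with random features standing in for pooled multi-scale embeddings.
feats = torch.randn(16, 64)
labs = torch.randint(0, 4, (16,))
print(class_weighted_contrastive_loss(feats, labs))
```

In the full MCSC framework this kind of loss would operate on multi-scale features and sit alongside the cross supervision between the CNN and Transformer branches, where each model's predictions provide labels for contrasting the other's features on unlabeled slices; the sketch above isolates only the class-balancing aspect.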