Concepts
Computer science
Modality (human–computer interaction)
Domain adaptation
Pattern recognition (psychology)
Consistency (knowledge bases)
Image translation
Artificial intelligence
Margin (machine learning)
Segmentation
Feature (linguistics)
Translation (biology)
Image (mathematics)
Image segmentation
Segmentation-based object categorization
Scale-space segmentation
Computer vision
Machine learning
Philosophy
Messenger RNA
Gene
Chemistry
Classifier (UML)
Biochemistry
Linguistics
Authors
Guodong Zeng,Till D. Lerch,Florian Schmaranzer,Guoyan Zheng,Jürgen Burger,Kate Gerber,Moritz Tannast,Klaus-Arno Siebenrock,Nicolas U. Gerber
Identifier
DOI:10.1007/978-3-030-87199-4_19
Abstract
Unsupervised domain adaptation (UDA) for cross-modality medical image segmentation has made great progress through domain-invariant feature learning or image appearance translation. Feature-level adaptation methods learn domain-invariant features that serve classification tasks well, but they usually cannot detect domain shift at the pixel level and therefore struggle on dense semantic segmentation tasks. Image appearance adaptation methods translate images into different styles with convincing appearance, but semantic consistency is hard to maintain, which leads to poor cross-modality segmentation. In this paper, we propose intra- and cross-modality semantic consistency (ICMSC) for UDA; our key insight is that the segmentations of synthesised images in different styles should be consistent. Specifically, our model consists of an image translation module and a domain-specific segmentation module. The image translation module is a standard CycleGAN, while the segmentation module contains two domain-specific segmentation networks. The intra-modality semantic consistency (IMSC) forces the reconstructed image after a cycle to be segmented in the same way as the original input image, while the cross-modality semantic consistency (CMSC) encourages a synthesised image after translation to be segmented exactly as it was before translation. Comprehensive experiments on two different datasets (cardiac and hip) demonstrate that our proposed method outperforms state-of-the-art UDA methods by a large margin.
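To make the two consistency terms concrete, below is a minimal PyTorch sketch of how IMSC and CMSC could be computed for the A → B → A direction, on top of a standard CycleGAN. The module names (G_AB, G_BA, S_A, S_B) are illustrative placeholders, and the abstract does not specify the loss form, so a KL divergence between softmax predictions is used here as one plausible choice.

```python
import torch
import torch.nn.functional as F

def icmsc_losses(x_a, G_AB, G_BA, S_A, S_B):
    """Sketch of the two consistency terms described in the abstract,
    for the A -> B -> A direction only (B -> A -> B is symmetric).
    All module names are hypothetical placeholders, not from the paper.
    """
    fake_b = G_AB(x_a)      # translate an A-style image into B style
    rec_a = G_BA(fake_b)    # CycleGAN reconstruction back to A style

    # Segmentation of the original input, treated here as a fixed
    # reference target (a common choice; the paper may differ).
    with torch.no_grad():
        ref = F.softmax(S_A(x_a), dim=1)

    # IMSC: the reconstructed image should be segmented in the same
    # way as the original input image.
    l_imsc = F.kl_div(F.log_softmax(S_A(rec_a), dim=1), ref,
                      reduction='batchmean')

    # CMSC: the translated image, segmented by the B-domain network,
    # should match the segmentation of the image before translation.
    l_cmsc = F.kl_div(F.log_softmax(S_B(fake_b), dim=1), ref,
                      reduction='batchmean')

    return l_imsc, l_cmsc
```

In practice these terms would be added, with weighting hyperparameters, to the usual CycleGAN adversarial and cycle-consistency losses; whether the reference prediction is detached and which divergence is used are design choices the abstract does not pin down.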