Keywords
Computer science
Artificial intelligence
Segmentation
Modality
Convolutional neural network
Image segmentation
Pattern recognition
Medical imaging
Computer vision
Adaptation
Deep learning
Optics
Physics
Authors
Junlin Xian,Xiang Li,Dandan Tu,Senhua Zhu,Changzheng Zhang,Xiaowu Liu,Xin Li,Xin Yang
Source
Journal: IEEE Transactions on Medical Imaging (Institute of Electrical and Electronics Engineers)
Date: 2023-01-19
Volume/Issue: 42 (6): 1774-1785
Citations: 11
Identifier
DOI: 10.1109/TMI.2023.3238114
Abstract
Deep convolutional neural networks (CNNs) have achieved impressive performance in medical image segmentation; however, their performance can degrade significantly when deployed on unseen data with heterogeneous characteristics. Unsupervised domain adaptation (UDA) is a promising solution to this problem. In this work, we present a novel UDA method, named dual adaptation-guiding network (DAG-Net), which incorporates two effective and complementary forms of structure-oriented guidance during training to collaboratively adapt a segmentation model from a labeled source domain to an unlabeled target domain. Specifically, our DAG-Net consists of two core modules: 1) Fourier-based contrastive style augmentation (FCSA), which implicitly guides the segmentation network to focus on learning modality-insensitive and structure-relevant features, and 2) residual space alignment (RSA), which provides explicit guidance to enhance the geometric continuity of predictions in the target modality based on a 3D prior of inter-slice correlation. We extensively evaluated our method on cardiac substructure and abdominal multi-organ segmentation for bidirectional cross-modality adaptation between MRI and CT images. Experimental results on the two tasks demonstrate that our DAG-Net greatly outperforms state-of-the-art UDA approaches for 3D medical image segmentation on unlabeled target images.
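Fourier-based style augmentation, as named in the FCSA module, typically exploits the fact that the low-frequency amplitude spectrum of an image carries appearance ("style") while the phase carries structure. The following is a minimal NumPy sketch of such amplitude mixing, in the spirit of Fourier Domain Adaptation (FDA, Yang & Soatto, CVPR 2020); the function name and the `beta`/`alpha` parameters are assumptions for illustration, and the contrastive training objective of FCSA is not reproduced here. This is not the authors' released implementation.

```python
import numpy as np

def fourier_style_mix(source_img, target_img, beta=0.05, alpha=1.0):
    """Mix the low-frequency amplitude spectrum of a source image with that
    of a target-domain image while keeping the source phase (structure).

    source_img, target_img: 2D float arrays of the same shape.
    beta:  fraction of the spectrum treated as the low-frequency "style" band.
    alpha: mixing weight (1.0 fully replaces the source low-freq amplitudes).
    """
    fft_src = np.fft.fft2(source_img)
    fft_tgt = np.fft.fft2(target_img)

    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_tgt = np.abs(fft_tgt)

    # Center the zero-frequency component so the low-frequency band is a
    # central square window.
    amp_src = np.fft.fftshift(amp_src)
    amp_tgt = np.fft.fftshift(amp_tgt)

    h, w = source_img.shape
    b = max(1, int(min(h, w) * beta))  # half-size of the low-frequency window
    ch, cw = h // 2, w // 2

    # Interpolate the low-frequency amplitudes (the "style").
    amp_src[ch - b:ch + b, cw - b:cw + b] = (
        (1 - alpha) * amp_src[ch - b:ch + b, cw - b:cw + b]
        + alpha * amp_tgt[ch - b:ch + b, cw - b:cw + b]
    )
    amp_src = np.fft.ifftshift(amp_src)

    # Recombine the mixed amplitude with the original phase and invert.
    mixed = amp_src * np.exp(1j * pha_src)
    return np.real(np.fft.ifft2(mixed))
```

Because the phase spectrum is left untouched, the augmented image preserves the source anatomy (and hence its segmentation labels) while taking on target-like intensity statistics, which is what lets a segmentation network learn modality-insensitive features from such pairs.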
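The "residual space" of the RSA module refers, per the abstract, to differences between predictions of adjacent slices, which encode 3D geometric continuity. The sketch below only computes such inter-slice residual maps for a stack of slice-wise softmax outputs; how DAG-Net aligns the residual distributions between domains (e.g., adversarially) is not shown, and the helper name is hypothetical.

```python
import torch

def inter_slice_residuals(pred_volume: torch.Tensor) -> torch.Tensor:
    """Compute residual maps between segmentation predictions of adjacent slices.

    pred_volume: tensor of shape (D, C, H, W) holding softmax probability
    maps for D consecutive slices of one 3D scan. Returns a (D-1, C, H, W)
    tensor of slice-to-slice differences; smooth anatomy yields small
    residuals, so aligning their statistics across domains encourages
    geometrically continuous predictions in the target modality.
    """
    return pred_volume[1:] - pred_volume[:-1]
```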