Computer Science
Artificial Intelligence
Segmentation
Convolutional Neural Network
Pattern Recognition
Discriminative Model
Feature Learning
Image Segmentation
Feature Extraction
Generative Adversarial Network
Deep Learning
Computer Vision
Authors
Yifan Zhang, Yonghui Wang, Lisheng Xu, Yudong Yao, Wei Qian, Lin Qi
Source
Journal: IEEE Journal of Biomedical and Health Informatics
[Institute of Electrical and Electronics Engineers]
Date: 2023-11-30
Pages: 1-12
Citations: 1
Identifier
DOI: 10.1109/jbhi.2023.3336965
Abstract
Unsupervised domain adaptation (UDA) methods have shown great potential in cross-modality medical image segmentation tasks, where target domain labels are unavailable. However, the domain shift among different image modalities remains challenging, because conventional UDA methods are based on convolutional neural networks (CNNs), which tend to focus on image texture and, owing to their locality, cannot establish the global semantic relevance of features. This paper proposes a novel end-to-end Swin Transformer-based generative adversarial network (ST-GAN) for cross-modality cardiac segmentation. In the generator of ST-GAN, we utilize the local receptive fields of CNNs to capture spatial information and introduce the Swin Transformer to extract global semantic information, which enables the generator to better extract domain-invariant features in UDA tasks. In addition, we design a multi-scale feature fuser to fully fuse the features acquired at different stages and improve the robustness of the UDA network. We extensively evaluated our method on two cross-modality cardiac segmentation tasks, using the MS-CMR 2019 dataset and the M&Ms dataset. On both tasks, the results demonstrate the effectiveness of ST-GAN compared with state-of-the-art cross-modality cardiac image segmentation methods.
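As a rough illustration of the generator design the abstract describes (a CNN branch for local spatial features, a transformer branch for global semantic context, and a multi-scale feature fuser), here is a minimal PyTorch sketch. All module names (HybridStage, MultiScaleFuser) and hyperparameters are hypothetical, and a plain full-attention TransformerEncoderLayer stands in for the Swin Transformer's shifted-window attention; this is not the paper's implementation.

```python
# Hypothetical sketch of a hybrid CNN + Transformer generator encoder.
# The real ST-GAN uses Swin shifted-window attention; a plain
# TransformerEncoderLayer is used here only as a simplified stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HybridStage(nn.Module):
    """One encoder stage: a CNN block for local spatial/texture cues,
    followed by a transformer layer for global semantic context."""
    def __init__(self, in_ch, out_ch, num_heads=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        # Stand-in for a Swin block: full self-attention over all positions.
        self.attn = nn.TransformerEncoderLayer(
            d_model=out_ch, nhead=num_heads,
            dim_feedforward=out_ch * 2, batch_first=True,
        )

    def forward(self, x):
        x = self.conv(x)                       # local features, downsampled 2x
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C) token sequence
        tokens = self.attn(tokens)             # global context across positions
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class MultiScaleFuser(nn.Module):
    """Upsamples deeper features and fuses them with shallower ones,
    mirroring the idea of combining features from different stages."""
    def __init__(self, ch_shallow, ch_deep, out_ch):
        super().__init__()
        self.proj = nn.Conv2d(ch_shallow + ch_deep, out_ch, 1)

    def forward(self, shallow, deep):
        deep_up = F.interpolate(deep, size=shallow.shape[-2:],
                                mode="bilinear", align_corners=False)
        return self.proj(torch.cat([shallow, deep_up], dim=1))


if __name__ == "__main__":
    stage1 = HybridStage(1, 32)   # grayscale cardiac MR slice in
    stage2 = HybridStage(32, 64)
    fuser = MultiScaleFuser(32, 64, 64)

    x = torch.randn(2, 1, 128, 128)  # dummy batch of 2D slices
    f1 = stage1(x)                   # (2, 32, 64, 64)
    f2 = stage2(f1)                  # (2, 64, 32, 32)
    fused = fuser(f1, f2)            # (2, 64, 64, 64) fused multi-scale map
    print(fused.shape)
```

Concatenating upsampled deep features with shallow ones before a 1x1 projection is one common way to realize multi-scale fusion; the paper's actual fuser and the adversarial training loop around the generator may differ in detail.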