Artificial intelligence
Computer science
Convolutional neural network
Segmentation
Transformer
Image segmentation
Redundancy (engineering)
Pattern recognition (psychology)
Feature extraction
Authors
Xiayu Guo,Xian Lin,Xin Yang,Li Yu,Kwang‐Ting Cheng,Zengqiang Yan
Identifier
DOI:10.1016/j.patcog.2024.110491
Abstract
The Transformer, designed for establishing long-range dependencies, has been widely studied as a complement to convolutional neural networks (CNNs) in medical image segmentation. However, existing CNN-Transformer hybrid approaches simply pursue implicit feature fusion without considering the underlying functional overlap between the two. Medical images typically follow stable anatomical structures, so convolution alone can handle most segmentation targets. Without such differentiation, forcing Transformers to apply self-attention to all image patches results in severe redundancy and hinders global feature extraction. In this paper, we propose a simple yet effective hybrid network named UCTNet, in which Transformers focus only on establishing global dependencies for the CNN's unreliable regions, identified through uncertainty estimation. In this way, the CNN and Transformer are explicitly fused to minimize functional overlap. More importantly, with fewer regions to handle, UCTNet converges more easily and learns more robust feature representations for hard examples. Extensive experiments on publicly available datasets demonstrate the superiority of UCTNet over state-of-the-art approaches, achieving Dice similarity coefficients of 89.44%, 92.91%, and 91.15% on Synapse, ACDC, and ISIC2018, respectively. Furthermore, this CNN-Transformer hybrid strategy is highly extendable to other frameworks without introducing additional computational burden. Code is available at https://github.com/innocence0206/UCTNet.
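The routing idea described in the abstract, that only the CNN's unreliable regions are passed to the Transformer branch, can be sketched as a per-patch uncertainty gate. The sketch below is illustrative only: it uses predictive entropy of hypothetical CNN softmax outputs and an arbitrary threshold, not the paper's actual uncertainty-estimation scheme.

```python
import math

def predictive_entropy(probs):
    # Shannon entropy of a class-probability vector (nats);
    # higher entropy means a less confident CNN prediction.
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_uncertain_patches(patch_probs, threshold):
    """Return indices of patches whose prediction entropy exceeds
    `threshold` -- only these would be routed to the Transformer
    branch for global dependency modelling; confident patches are
    left to the CNN alone."""
    return [i for i, probs in enumerate(patch_probs)
            if predictive_entropy(probs) > threshold]

# Hypothetical CNN softmax outputs for four image patches (3 classes)
patches = [
    [0.98, 0.01, 0.01],   # confident -> CNN alone suffices
    [0.40, 0.35, 0.25],   # ambiguous -> route to Transformer
    [0.90, 0.05, 0.05],   # confident
    [0.34, 0.33, 0.33],   # highly ambiguous -> route to Transformer
]
uncertain = select_uncertain_patches(patches, threshold=0.5)
# -> [1, 3]
```

Because self-attention cost grows with the number of tokens, restricting it to the uncertain subset is what keeps the hybrid from adding computational burden.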