Encoder
Segmentation
Computer Science
Artificial Intelligence
Image Segmentation
Deep Learning
Bottleneck
Pattern Recognition (Psychology)
Computer Vision
Embedded System
Operating System
Authors
W. Zhang, Shanxiong Chen, Yuqi Ma, Yu Liu, Xu Cao
Identifier
DOI: 10.1016/j.compbiomed.2024.108005
Abstract
Medical image segmentation is a crucial topic in medical image processing. Accurately segmenting brain tumor regions from multimodal MRI scans is essential for clinical diagnosis and survival prediction. However, similar intensity distributions, variable tumor shapes, and fuzzy boundaries pose severe challenges for brain tumor segmentation. Traditional segmentation networks based on UNet struggle to establish explicit long-range dependencies in the feature space due to the limited receptive field of CNNs; such dependencies are particularly important for dense prediction tasks such as brain tumor segmentation. Recent works have incorporated the powerful global modeling capability of the Transformer into UNet to achieve more precise segmentation results. Nevertheless, these methods encounter some issues: (1) global information is often modeled by simply stacking Transformer layers in a specific module, resulting in high computational complexity and underutilization of the potential of the UNet architecture; (2) the rich boundary information of tumor subregions in multi-scale features is often overlooked. Motivated by these challenges, we propose an advanced fusion of the Transformer with UNet by reexamining its three core parts (encoder, bottleneck, and skip connections). First, we introduce a CNN-Transformer module in the encoder to replace the traditional CNN module, enabling the capture of deep spatial dependencies from input images. To address high-level semantic information, we incorporate a computationally efficient spatial-channel attention layer in the bottleneck for global interaction, highlighting important semantic features from the encoder path output. For irregular lesions, we fuse the multi-scale features from the encoder output with the decoder features in the skip connections by computing cross-attention. This adaptive querying of valuable information from multi-scale features enhances the boundary localization ability of the decoder path and suppresses redundant features with low correlation. Compared to existing methods, our model further enhances the learning capacity of the overall UNet architecture while maintaining low computational complexity. Experimental results on the BraTS2018 and BraTS2020 brain tumor segmentation datasets demonstrate that our model achieves results comparable or superior to recent CNN- and Transformer-based models. The average DSC and HD95 are 0.854 and 6.688 on BraTS2018, and 0.862 and 5.455 on BraTS2020. In addition, our model achieves the best segmentation of the enhancing tumor region, demonstrating the effectiveness of our method. Our code will be made publicly available at https://github.com/wzhangck/ETUnet.
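The abstract describes replacing the plain CNN stages of the UNet encoder with a hybrid CNN-Transformer module. The sketch below illustrates one way such a hybrid stage can be built in PyTorch, assuming 3D MRI feature maps; the class name `ConvTransformerStage`, the layer choices, and the head count are illustrative assumptions, not the paper's exact design (the authors' code is at the GitHub link above).

```python
import torch
import torch.nn as nn

class ConvTransformerStage(nn.Module):
    """Illustrative hybrid encoder stage (an assumption, not the paper's module):
    a convolutional block captures local detail, then self-attention over the
    flattened voxel tokens adds long-range spatial dependencies."""
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
            nn.ReLU(inplace=True),
        )
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.conv(x)                                  # local features, CNN branch
        b, c, d, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))  # (B, D*H*W, C) voxel tokens
        attn_out, _ = self.attn(tokens, tokens, tokens)   # global self-attention
        # residual fusion of global context back into the convolutional feature map
        return x + attn_out.transpose(1, 2).reshape(b, c, d, h, w)
```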
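For the bottleneck, the abstract only states that a computationally efficient spatial-channel attention layer re-weights the high-level semantic features. A minimal sketch of that general idea, assuming a squeeze-and-excitation style channel gate followed by a single-map spatial gate, is given below; it is not the authors' released layer.

```python
import torch
import torch.nn as nn

class SpatialChannelAttention(nn.Module):
    """Illustrative bottleneck attention (an assumption, not the paper's exact layer):
    a channel gate and a spatial gate, both far cheaper than stacked Transformer layers."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_gate = nn.Sequential(        # which channels matter
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(        # which voxel positions matter
            nn.Conv3d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)   # re-weight channels of the encoder output
        x = x * self.spatial_gate(x)   # re-weight spatial positions
        return x
```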
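Finally, the skip connections are described as fusing encoder multi-scale features with decoder features via cross-attention. The sketch below shows the general mechanism under the assumption that decoder features serve as queries against same-scale encoder features as keys/values; shapes, names, and the residual fusion are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class CrossAttentionSkip(nn.Module):
    """Illustrative skip-connection fusion (an assumption about the mechanism):
    the decoder queries the encoder features to pull in boundary-relevant
    information and down-weight low-correlation features."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, dec_feat: torch.Tensor, enc_feat: torch.Tensor) -> torch.Tensor:
        # dec_feat, enc_feat: (B, C, D, H, W) feature maps at the same resolution
        b, c, d, h, w = dec_feat.shape
        q = self.norm_q(dec_feat.flatten(2).transpose(1, 2))    # queries from decoder
        kv = self.norm_kv(enc_feat.flatten(2).transpose(1, 2))  # keys/values from encoder
        fused, _ = self.attn(q, kv, kv)
        return dec_feat + fused.transpose(1, 2).reshape(b, c, d, h, w)  # residual fusion


# Quick shape check with toy tensors
if __name__ == "__main__":
    skip = CrossAttentionSkip(dim=64)
    dec = torch.randn(1, 64, 8, 8, 8)
    enc = torch.randn(1, 64, 8, 8, 8)
    print(skip(dec, enc).shape)  # torch.Size([1, 64, 8, 8, 8])
```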