Computer science
Segmentation
Convolutional neural network
Transformer
Encoder
Artificial intelligence
Image segmentation
Locality
Pattern recognition (psychology)
Computer vision
Linguistics
Philosophy
Physics
Quantum mechanics
Voltage
Operating system
Authors
Marjan Vatanpour,Javad Haddadnia
Source
Journal: IEEE Access
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume/Pages: 11: 125511-125518
Citations: 1
Identifier
DOI:10.1109/access.2023.3330958
Abstract
Segmenting brain tumors in MR modalities is an important step in treatment planning. Recently, most methods have relied on Fully Convolutional Neural Networks (FCNNs), which achieve acceptable results for this task. Among the various networks, the U-shaped architecture known as U-Net has gained enormous success in medical image segmentation. However, the lack of long-range dependencies and the locality of convolutional layers in FCNNs can cause problems when segmenting tumors of different sizes. Motivated by the success of Transformers in natural language processing (NLP), which stems from the self-attention mechanism's ability to model global information, several studies have designed vision-based variants of U-shaped Transformers. Therefore, to retain the effectiveness of U-Net, we propose TransDoubleU-Net, which consists of two U-shaped networks for 3D MR brain image segmentation: a dual-scale Swin Transformer encoder and a dual-level decoder based on CNNs and Transformers for better localization of features. The model's core uses the shifted-window multi-head self-attention of the Swin Transformer, with skip connections to a CNN-based decoder. The outputs are evaluated on the BraTS2019 and BraTS2020 datasets and show promising segmentation results.
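The two ingredients the abstract highlights, shifted-window multi-head self-attention in the style of Swin Transformer and a CNN decoder that receives encoder features through skip connections, can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: it is 2D rather than 3D, uses torch.nn.MultiheadAttention instead of Swin's relative-position-bias attention, and omits the attention mask that the real shifted-window scheme applies at window borders; all module and parameter names (WindowAttention, TinyCNNDecoderStage, ws, shift) are hypothetical.

```python
# Minimal, illustrative sketch (not the paper's code) of (1) window-based
# multi-head self-attention with an optional shift, and (2) a U-Net-style
# CNN decoder stage fed by a skip connection. 2D for brevity; the paper
# targets 3D MR volumes.
import torch
import torch.nn as nn


def window_partition(x, ws):
    """Split a (B, H, W, C) feature map into non-overlapping ws x ws windows."""
    B, H, W, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)


class WindowAttention(nn.Module):
    """Self-attention computed independently inside each window. Rolling the
    feature map between consecutive blocks (shift > 0) lets information flow
    across window borders, as in the shifted-window scheme of Swin."""
    def __init__(self, dim, heads=4, ws=4, shift=0):
        super().__init__()
        self.ws, self.shift = ws, shift
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                        # x: (B, H, W, C)
        if self.shift:
            x = torch.roll(x, (-self.shift, -self.shift), dims=(1, 2))
        B, H, W, C = x.shape
        win = self.norm(window_partition(x, self.ws))   # (B*nw, ws*ws, C)
        out, _ = self.attn(win, win, win)
        out = out.view(B, H // self.ws, W // self.ws, self.ws, self.ws, C)
        out = out.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)
        if self.shift:
            out = torch.roll(out, (self.shift, self.shift), dims=(1, 2))
        return out


class TinyCNNDecoderStage(nn.Module):
    """Upsample, concatenate the encoder skip connection, then refine with
    convolutions: the usual U-Net-style decoder step."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, 2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x, skip):
        x = self.up(x)
        return self.conv(torch.cat([x, skip], dim=1))


if __name__ == "__main__":
    feat = torch.randn(1, 32, 32, 48)              # (B, H, W, C) token map
    attn = WindowAttention(dim=48, heads=4, ws=4, shift=2)
    print(attn(feat).shape)                        # torch.Size([1, 32, 32, 48])

    dec = TinyCNNDecoderStage(in_ch=48, skip_ch=24, out_ch=24)
    x = torch.randn(1, 48, 16, 16)                 # low-resolution decoder input
    skip = torch.randn(1, 24, 32, 32)              # encoder skip feature
    print(dec(x, skip).shape)                      # torch.Size([1, 24, 32, 32])
```

In an architecture like TransDoubleU-Net, attention blocks of this kind would sit in the dual-scale encoder while decoder stages of this kind would consume the corresponding skip connections; a faithful 3D version would use Conv3d/ConvTranspose3d and partition windows over three spatial axes.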