Computer science
Utilization
Transformer
Segmentation
Artificial intelligence
Convolutional neural network
Image segmentation
Encoding
Context model
Concatenation (mathematics)
Pattern recognition (psychology)
Machine learning
Engineering
Mathematics
Biochemistry
Chemistry
Computer security
Electrical engineering
Voltage
Combinatorics
Object (grammar)
Gene
Authors
Hong-Yu Zhou, Jiansen Guo, Yinghao Zhang, Xiaoguang Han, Lequan Yu, Liansheng Wang, Yizhou Yu
Source
Journal: IEEE Transactions on Image Processing
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume: 32, pages 4036-4045
Citations: 112
Identifiers
DOI: 10.1109/TIP.2023.3293771
Abstract
Transformer, the model of choice for natural language processing, has drawn scant attention from the medical imaging community. Given the ability to exploit long-term dependencies, transformers are promising to help convolutional neural networks learn more contextualized visual representations. However, most of the recently proposed transformer-based segmentation approaches simply treated transformers as assisted modules that help encode global context into convolutional representations. To address this issue, we introduce nnFormer (i.e., not-another transFormer), a 3D transformer for volumetric medical image segmentation. nnFormer not only exploits the combination of interleaved convolution and self-attention operations, but also introduces local and global volume-based self-attention mechanisms to learn volume representations. Moreover, nnFormer proposes to use skip attention to replace the traditional concatenation/summation operations in the skip connections of U-Net-like architectures. Experiments show that nnFormer outperforms previous transformer-based counterparts by large margins on three public datasets. Compared to nnUNet, the most widely recognized convnet-based 3D medical segmentation model, nnFormer produces significantly lower HD95 and is much more computationally efficient. Furthermore, we show that nnFormer and nnUNet are highly complementary to each other in model ensembling. Code and models for nnFormer are available at https://git.io/JSf3i.
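The abstract mentions two architectural ideas: interleaving convolution with self-attention, and "skip attention" that fuses encoder skip features into the decoder instead of concatenating or summing them. The following is a minimal PyTorch sketch of both ideas, not the authors' released implementation (that is at https://git.io/JSf3i). Module names (ConvAttnBlock, SkipAttention), shapes, and hyper-parameters are illustrative assumptions, and plain full-volume self-attention stands in for nnFormer's actual local/global volume-based attention.

```python
# Minimal sketch (assumed, not the official nnFormer code) of
# (1) interleaved convolution + voxel self-attention and
# (2) skip attention replacing concatenation in a U-Net-like decoder.
import torch
import torch.nn as nn


class ConvAttnBlock(nn.Module):
    """Interleaves a 3D convolution with multi-head self-attention over voxels."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
            nn.GELU(),
        )
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, D, H, W)
        x = x + self.conv(x)                          # local convolutional mixing
        b, c, d, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))   # (B, D*H*W, C) voxel tokens
        attn_out, _ = self.attn(tokens, tokens, tokens)    # global mixing across voxels
        return x + attn_out.transpose(1, 2).reshape(b, c, d, h, w)


class SkipAttention(nn.Module):
    """Fuses an encoder skip tensor into decoder features via cross-attention."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.norm_q = nn.LayerNorm(channels)
        self.norm_kv = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, dec: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        # dec, skip: (B, C, D, H, W) at the same resolution
        b, c, d, h, w = dec.shape
        q = self.norm_q(dec.flatten(2).transpose(1, 2))     # decoder features as queries
        kv = self.norm_kv(skip.flatten(2).transpose(1, 2))  # encoder skip as keys/values
        fused, _ = self.attn(q, kv, kv)
        return dec + fused.transpose(1, 2).reshape(b, c, d, h, w)


if __name__ == "__main__":
    x = torch.randn(1, 32, 8, 16, 16)      # a small volumetric feature map
    block = ConvAttnBlock(32)
    skip_fuse = SkipAttention(32)
    y = block(x)
    print(skip_fuse(y, x).shape)           # torch.Size([1, 32, 8, 16, 16])
```

In this sketch the decoder queries attend over the encoder skip tokens, so the fusion is learned rather than a fixed concatenation; in practice one would restrict attention to local windows (and a downsampled global path) to keep the token count of full 3D volumes tractable.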