Keywords
Computer science, Segmentation, Artificial intelligence, Image segmentation, Encoder, Transformer, Convolutional neural network, Locality, Segmentation-based object categorization, Scale-space segmentation, Computer vision, Pattern recognition
Authors
Jieneng Chen, Yongyi Lu, Qihang Yu, Xiangde Luo, Ehsan Adeli, Yan Wang, Le Lu, Alan Yuille, Yuyin Zhou
Source
Venue: arXiv (Cornell University)
Date: 2021-01-01
Citations: 2135
Identifier
DOI: 10.48550/arXiv.2102.04306
Abstract
Medical image segmentation is an essential prerequisite for developing healthcare systems, especially for disease diagnosis and treatment planning. On various medical image segmentation tasks, the u-shaped architecture, also known as U-Net, has become the de-facto standard and achieved tremendous success. However, due to the intrinsic locality of convolution operations, U-Net generally demonstrates limitations in explicitly modeling long-range dependencies. Transformers, designed for sequence-to-sequence prediction, have emerged as alternative architectures with innate global self-attention mechanisms, but can suffer from limited localization ability owing to insufficient low-level detail. In this paper, we propose TransUNet, which combines the merits of both Transformers and U-Net, as a strong alternative for medical image segmentation. On the one hand, the Transformer encodes tokenized image patches from a convolutional neural network (CNN) feature map as the input sequence for extracting global context. On the other hand, the decoder upsamples the encoded features, which are then combined with the high-resolution CNN feature maps to enable precise localization. We argue that Transformers can serve as strong encoders for medical image segmentation tasks, with U-Net complementing them by recovering localized spatial information to enhance finer details. TransUNet achieves superior performance over various competing methods on different medical applications, including multi-organ segmentation and cardiac segmentation. Code and models are available at https://github.com/Beckschen/TransUNet.
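The pipeline described in the abstract (CNN feature extraction, patch tokenization, Transformer encoding for global context, and a U-Net-style decoder that fuses upsampled features with high-resolution CNN skip connections) can be summarized in a short PyTorch sketch. This is a minimal illustration under assumed channel widths, depths, and module names; it is not the authors' implementation (see the linked repository for the official code) and omits details such as learnable positional embeddings and the pretrained ResNet/ViT hybrid backbone.

```python
# Minimal sketch of a hybrid CNN-Transformer encoder with a U-Net-style decoder.
# All layer sizes and names are illustrative assumptions, not the official TransUNet.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions followed by 2x spatial downsampling."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class TransUNetSketch(nn.Module):
    def __init__(self, in_ch=1, num_classes=9, embed_dim=256, depth=4, heads=8):
        super().__init__()
        # CNN encoder: three stages at H/2, H/4, H/8 resolution.
        self.stage1 = conv_block(in_ch, 64)
        self.stage2 = conv_block(64, 128)
        self.stage3 = conv_block(128, 256)
        # Tokenize the deepest CNN feature map (1x1 projection), then apply a
        # Transformer encoder over the patch sequence for global self-attention.
        # Learnable positional embeddings are omitted here for brevity.
        self.proj = nn.Conv2d(256, embed_dim, kernel_size=1)
        layer = nn.TransformerEncoderLayer(embed_dim, heads,
                                           dim_feedforward=4 * embed_dim,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)
        # Decoder: upsample the Transformer output and fuse it with the
        # high-resolution CNN features via skip connections (U-Net style).
        self.up1 = nn.ConvTranspose2d(embed_dim, 128, kernel_size=2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv2d(128 + 128, 128, 3, padding=1), nn.ReLU(inplace=True))
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = nn.Sequential(nn.Conv2d(64 + 64, 64, 3, padding=1), nn.ReLU(inplace=True))
        self.up3 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x):
        s1 = self.stage1(x)            # (B, 64,  H/2, W/2)
        s2 = self.stage2(s1)           # (B, 128, H/4, W/4)
        s3 = self.stage3(s2)           # (B, 256, H/8, W/8)
        tokens = self.proj(s3)         # (B, D,   H/8, W/8)
        b, d, h, w = tokens.shape
        tokens = tokens.flatten(2).transpose(1, 2)        # (B, N, D) patch sequence
        tokens = self.transformer(tokens)                  # global context
        feat = tokens.transpose(1, 2).reshape(b, d, h, w)  # back to a 2D feature map
        x = self.dec1(torch.cat([self.up1(feat), s2], dim=1))  # fuse with H/4 skip
        x = self.dec2(torch.cat([self.up2(x), s1], dim=1))     # fuse with H/2 skip
        return self.head(self.up3(x))                           # (B, classes, H, W)


if __name__ == "__main__":
    model = TransUNetSketch()
    out = model(torch.randn(1, 1, 224, 224))
    print(out.shape)  # torch.Size([1, 9, 224, 224])
```

The key design point the sketch reflects is the division of labor: the Transformer operates only on the coarse, tokenized feature map to capture long-range dependencies cheaply, while the decoder's skip connections reintroduce the low-level CNN details needed for precise boundaries.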