Keywords
Image registration
Artificial intelligence
Super-resolution
Computer vision
Computer science
Medical imaging
Transformer
Image resolution
Iterative reconstruction
Image (mathematics)
Pattern recognition
Engineering
Authors
Runshi Zhang,Hao Mo,Junchen Wang,Bin B. Jie,Yang He,Nenghao Jin,Liang Zhu
Source
Journal: IEEE Transactions on Medical Imaging
Publisher: Institute of Electrical and Electronics Engineers
Date: 2024-01-01
Volume/Issue: 1-1
Identifier
DOI: 10.1109/tmi.2024.3467919
Abstract
Complicated image registration is a key issue in medical image analysis, and deep learning-based methods have achieved better results than traditional methods. These methods fall into ConvNet-based and Transformer-based approaches. ConvNets effectively exploit local information and reduce redundancy through small-neighborhood convolutions, but their limited receptive field prevents them from capturing global dependencies. Transformers establish long-distance dependencies via a self-attention mechanism; however, computing the relationships among all tokens introduces high redundancy. We propose a novel unsupervised image registration method, the unified Transformer and superresolution (UTSRMorph) network, which enhances feature representation learning in the encoder and generates detailed displacement fields in the decoder to overcome these problems. We first propose a fusion attention block that integrates the advantages of ConvNets and Transformers by inserting a ConvNet-based channel attention module into a multihead self-attention module. The overlapping attention block, a novel cross-attention method, uses overlapping windows to capture rich correlations and matching information between a pair of images. These blocks are flexibly stacked into a new, powerful encoder. The decoder's generation of a high-resolution deformation displacement field from low-resolution features is treated as a superresolution process: a superresolution module replaces interpolation upsampling, which overcomes feature degradation. UTSRMorph was compared with state-of-the-art registration methods on 3D brain MR (OASIS, IXI) and MR-CT (abdominal, craniomaxillofacial) datasets. The qualitative and quantitative results indicate that UTSRMorph achieves relatively better performance. The code and datasets used are publicly available at https://github.com/Runshi-Zhang/UTSRMorph.
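For orientation, the minimal sketch below illustrates, under stated assumptions, the two building blocks named in the abstract: a fusion attention block that inserts ConvNet-style channel attention into multihead self-attention, and a learned upsampling module standing in for interpolation in the decoder. It assumes a PyTorch backend; all class and parameter names are invented for illustration and are not taken from the UTSRMorph repository, which should be consulted for the authors' actual implementation.

```python
# Hedged sketch only: illustrative names, not the UTSRMorph code.
import torch
import torch.nn as nn


class ChannelAttention3D(nn.Module):
    """Squeeze-and-excitation style channel attention over 3D feature maps."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                              # x: (B, C, D, H, W)
        w = self.fc(self.pool(x).flatten(1))           # (B, C) channel weights
        return x * w.view(x.size(0), -1, 1, 1, 1)


class FusionAttentionBlock(nn.Module):
    """Multihead self-attention over voxel tokens fused with channel attention."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.mhsa = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.channel_attn = ChannelAttention3D(channels)

    def forward(self, x):                              # x: (B, C, D, H, W)
        b, c, d, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)          # (B, D*H*W, C) voxel tokens
        t = self.norm(tokens)
        attn, _ = self.mhsa(t, t, t)                   # global token-to-token attention
        y = (tokens + attn).transpose(1, 2).reshape(b, c, d, h, w)
        return y + self.channel_attn(y)                # add ConvNet-style channel attention


class SRUpsample3D(nn.Module):
    """Learned 2x upsampling (transposed conv) standing in for interpolation."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.up = nn.ConvTranspose3d(in_channels, out_channels,
                                     kernel_size=2, stride=2)

    def forward(self, x):
        return self.up(x)


if __name__ == "__main__":
    feat = torch.randn(1, 32, 8, 8, 8)                 # small random feature volume
    fused = FusionAttentionBlock(32)(feat)             # (1, 32, 8, 8, 8)
    up = SRUpsample3D(32, 16)(fused)                   # (1, 16, 16, 16, 16)
    print(fused.shape, up.shape)
```

The overlapping-window cross-attention and the full encoder/decoder described in the abstract are defined in the repository linked above.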