Computer science
Artificial intelligence
Wavelet
Residual
Pattern recognition (psychology)
Fuse (electrical)
Transformer
Feature extraction
Convolutional neural network
Computer vision
Algorithm
Quantum mechanics
Electrical engineering
Physics
Engineering
Voltage
Authors
Guangyuan Li,Jun Lv,Chengyan Wang,Qi Dou,Jing Qin
Identifier
DOI:10.1007/978-3-031-16446-0_44
Abstract
Current multi-contrast MRI super-resolution (SR) methods often harness convolutional neural networks (CNNs) for feature extraction and fusion. However, existing models have shortcomings that prevent them from producing satisfactory results. First, during feature extraction, some high-frequency details in the images are lost, resulting in blurred boundaries in the reconstructed images, which may impede subsequent diagnosis and treatment. Second, the receptive field of the convolution kernel is limited, making it difficult for the networks to capture long-range/non-local features. Third, most of these models are driven solely by training data, neglecting prior knowledge about the correlations among different contrasts, which, once well leveraged, can effectively enhance performance with limited training data. In this paper, we propose a novel model, WavTrans, that synergizes wavelet transforms with a new cross-attention transformer to comprehensively tackle these challenges. Specifically, we harness a one-level wavelet transformation to obtain the detail and approximation coefficients of the reference-contrast MR images (Ref). The approximation coefficients compress the low-frequency global information, while the detail coefficients represent the high-frequency local structure and texture information. We then propose a residual cross-attention Swin Transformer to extract and fuse features, establishing long-distance dependencies between features and maximizing the restoration of high-frequency information in the target-contrast images (Tar). In addition, a multi-residual fusion module is designed to fuse the high-frequency information of the upsampled Tar and the original Ref to ensure the restoration of detailed information. Extensive experiments demonstrate that WavTrans outperforms state-of-the-art (SOTA) methods by a considerable margin for 2-fold and 4-fold upsampling. Code will be available at https://github.com/XAIMI-Lab/WavTrans.
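The two ingredients the abstract describes, a one-level wavelet decomposition of the reference contrast into approximation and detail coefficients, and cross-attention in which target-contrast features query reference-contrast features, can be illustrated with a short sketch. The snippet below is a minimal illustration only: the choice of PyWavelets and PyTorch, the Haar wavelet, and every name (CrossAttentionBlock, ref_slice, token sizes) are assumptions made for demonstration and are not taken from the released WavTrans code.

```python
# Minimal sketch of (1) a one-level 2-D wavelet decomposition of a reference
# MR slice and (2) cross-attention where target-contrast tokens attend to
# reference-contrast tokens. All names and sizes are illustrative assumptions.
import numpy as np
import pywt
import torch
import torch.nn as nn

# --- (1) One-level Haar wavelet transform of a reference-contrast slice ---
ref_slice = np.random.rand(256, 256).astype(np.float32)  # stand-in for a Ref MR slice
cA, (cH, cV, cD) = pywt.dwt2(ref_slice, "haar")
# cA: low-frequency approximation (global content), shape (128, 128)
# cH, cV, cD: high-frequency horizontal/vertical/diagonal details (edges, texture)

# --- (2) Cross-attention: target-contrast tokens query reference tokens ---
class CrossAttentionBlock(nn.Module):
    """Target features act as queries; reference features supply keys/values."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tar_tokens: torch.Tensor, ref_tokens: torch.Tensor) -> torch.Tensor:
        q = self.norm_q(tar_tokens)
        kv = self.norm_kv(ref_tokens)
        fused, _ = self.attn(query=q, key=kv, value=kv)
        return tar_tokens + fused  # residual connection keeps the original Tar features

# Toy usage: 64 tokens of dimension 96 per contrast (sizes are arbitrary).
tar = torch.randn(1, 64, 96)
ref = torch.randn(1, 64, 96)
out = CrossAttentionBlock(dim=96)(tar, ref)
print(cA.shape, out.shape)  # (128, 128) and torch.Size([1, 64, 96])
```

The residual connection in the block mirrors the abstract's emphasis on preserving target features while injecting high-frequency reference information; the actual model uses a residual cross-attention Swin Transformer with window-based attention, which this plain multi-head-attention sketch does not reproduce.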