Upsampling
Computer science
Artificial intelligence
Contrast (vision)
Computer vision
Image quality
Transformer
Image resolution
Embedding
Pattern recognition
Image (mathematics)
Authors
Beiji Zou,Zexin Ji,Chengzhang Zhu,Yulan Dai,Wensheng Zhang,Xiaoyan Kui
Identifier
DOI:10.1016/j.bspc.2022.104154
Abstract
Magnetic resonance imaging can present precise anatomical structure in clinical applications. Nevertheless, due to constraints such as scanning equipment cost and scanning time, high-resolution knee MR images are difficult to obtain, so super-resolution techniques have been developed to improve image quality. Unfortunately, conventional CNN-based methods cannot explicitly learn long-range dependencies in images and simply integrate the auxiliary contrast without considering the characteristics of medical images. To tackle this issue, our approach adaptively captures and fuses the significant auxiliary information of the multi-contrast images to improve knee magnetic resonance image quality. We propose a multi-scale deformable transformer network (MSDT) for multi-contrast knee magnetic resonance imaging super-resolution. First, we aggregate multi-scale patch embeddings from the multi-contrast knee MR images to effectively preserve local contextual details and global structure information. Then, a deformable transformer architecture is designed to learn data-dependent sparse attention over the knee MR image, which can adaptively capture high-frequency foreground details according to the image content. The proposed method is evaluated on the fastMRI dataset under 2× and 4× enlargements. Our MSDT achieves a PSNR of 31.98 and an SSIM of 0.713 at the 2× upsampling factor, and a PSNR of 30.38 and an SSIM of 0.615 at the 4× upsampling factor. Moreover, our method generates clear tissue structures and fine details. The experimental results show superior performance compared with state-of-the-art super-resolution methods, indicating that MSDT can effectively reconstruct high-quality knee MR images.
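To make the multi-scale patch embedding step more concrete, the sketch below shows one plausible PyTorch realization: each branch embeds the image with a strided convolution at a different patch size, the branch outputs are resampled to a common grid and fused, and the embeddings of the target and auxiliary contrasts are concatenated. This is a minimal illustration only; the module name MultiScalePatchEmbed, the patch sizes (2, 4, 8), the embedding dimension, and the toy tensor shapes are assumptions, not the authors' released implementation.

import torch
import torch.nn as nn

class MultiScalePatchEmbed(nn.Module):
    """Hypothetical multi-scale patch embedding (illustrative sketch, not the
    paper's code). Each branch projects the image with a strided convolution;
    branch outputs are upsampled to the finest grid and fused by a 1x1 conv."""

    def __init__(self, in_ch=1, embed_dim=64, patch_sizes=(2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, embed_dim, kernel_size=p, stride=p)
            for p in patch_sizes
        )
        # fuse the concatenated multi-scale features back to embed_dim channels
        self.fuse = nn.Conv2d(embed_dim * len(patch_sizes), embed_dim, kernel_size=1)

    def forward(self, x):
        # x: (B, in_ch, H, W) low-resolution knee MR slice
        feats = []
        target_size = None
        for branch in self.branches:
            f = branch(x)
            if target_size is None:
                target_size = f.shape[-2:]  # finest grid defines the output size
            f = nn.functional.interpolate(
                f, size=target_size, mode="bilinear", align_corners=False
            )
            feats.append(f)
        return self.fuse(torch.cat(feats, dim=1))  # (B, embed_dim, H/p0, W/p0)

if __name__ == "__main__":
    # Toy usage: embed the target contrast and an auxiliary contrast, then
    # concatenate the two feature maps as input to a downstream transformer.
    embed = MultiScalePatchEmbed()
    target_lr = torch.randn(1, 1, 80, 80)   # assumed low-resolution target slice
    aux = torch.randn(1, 1, 80, 80)         # assumed auxiliary-contrast slice
    tokens = torch.cat([embed(target_lr), embed(aux)], dim=1)
    print(tokens.shape)                     # torch.Size([1, 128, 40, 40])

In such a design, combining coarse and fine patch sizes is what lets the embedding keep both local contextual detail and global structure before the deformable attention stage; the data-dependent sparse attention itself is not reproduced here.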