Artificial intelligence
Computer science
Pattern recognition (psychology)
Similarity (geometry)
Peak signal-to-noise ratio
Image quality
Computer vision
Magnetic resonance imaging
Iterative reconstruction
Image resolution
Image (mathematics)
Super-resolution
Medicine
Radiology
Authors
Jiale Wang, Alexander F. Heimann, Moritz Tannast, Guoyan Zheng
Identifier
DOI:10.1007/978-3-031-43907-0_48
Abstract
Deep learning-based algorithms for single MR image super-resolution have shown great potential in enhancing the resolution of low-quality images. However, many of these methods rely on supervised training with paired low-resolution (LR) and high-resolution (HR) MR images, which are difficult to obtain in clinical settings because acquiring HR MR images requires long scan times. In contrast, HR CT images are acquired as part of clinical routine. In this paper, we propose a CT-guided, unsupervised MRI super-resolution reconstruction method based on joint cross-modality image translation and super-resolution reconstruction, eliminating the need for high-resolution MRI during training. The proposed approach is validated on two datasets acquired at two different clinical sites. Well-established metrics, including Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Learned Perceptual Image Patch Similarity (LPIPS), are used to assess the performance of the proposed method. Our method achieved an average PSNR of 32.23, an average SSIM of 0.90, and an average LPIPS of 0.14 when evaluated on data from the first site, and an average PSNR of 30.58, an average SSIM of 0.88, and an average LPIPS of 0.10 on data from the second site.
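The abstract reports PSNR, SSIM, and LPIPS as evaluation metrics. The sketch below shows one way these three metrics could be computed for a single reconstructed slice against an HR reference; it is a minimal illustration, not the paper's evaluation code. The use of scikit-image and the lpips package, the function name `evaluate_slice`, and the channel-replication step for LPIPS are all assumptions made here for the example.

```python
import numpy as np
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate_slice(hr_image: np.ndarray, sr_image: np.ndarray) -> dict:
    """Compute PSNR, SSIM, and LPIPS between an HR reference slice and an SR result.

    hr_image and sr_image are 2D float arrays of the same shape.
    """
    data_range = hr_image.max() - hr_image.min()

    # Standard intensity-based metrics from scikit-image.
    psnr = peak_signal_noise_ratio(hr_image, sr_image, data_range=data_range)
    ssim = structural_similarity(hr_image, sr_image, data_range=data_range)

    # LPIPS expects 3-channel tensors scaled to [-1, 1]; the grayscale slice is
    # replicated across channels here (an illustrative choice, not the paper's).
    def to_tensor(img: np.ndarray) -> torch.Tensor:
        img = 2.0 * (img - img.min()) / (img.max() - img.min()) - 1.0
        t = torch.from_numpy(img.astype(np.float32))[None, None]  # (1, 1, H, W)
        return t.repeat(1, 3, 1, 1)  # (1, 3, H, W)

    loss_fn = lpips.LPIPS(net='alex')
    with torch.no_grad():
        lpips_val = loss_fn(to_tensor(hr_image), to_tensor(sr_image)).item()

    return {"PSNR": psnr, "SSIM": ssim, "LPIPS": lpips_val}
```

Averaging such per-slice (or per-volume) values over a test set would yield summary numbers of the kind reported in the abstract; the exact protocol (2D vs. 3D evaluation, intensity normalization) is not specified there.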