Computer science
Artificial intelligence
Representation (politics)
Fuse (electrical)
Inference
Pattern recognition (psychology)
Resolution (logic)
Super-resolution
Computer vision
Deep learning
Image (mathematics)
Image resolution
Scale (ratio)
Engineering
Physics
Electrical engineering
Politics
Law
Quantum mechanics
Political science
Authors
Jiale Wang,Runze Wang,Rong Tu,Guoyan Zheng
Identifier
DOI:10.1007/978-3-031-16446-0_43
Abstract
Deep learning-based single image super resolution (SISR) algorithms have great potential to recover high-resolution (HR) images from low-resolution (LR) inputs. However, most studies require paired LR and HR images for supervised training, which are difficult to obtain in clinical applications. In this paper, we propose an unsupervised arbitrary scale super-resolution reconstruction (UASSR) method based on disentangled representation learning, eliminating the requirement of paired images for training. We apply our method to the task of generating HR images with high inter-plane resolution from LR images with low inter-plane resolution. At the inference stage, we design a strategy to fuse multiple reconstructed HR images from different views to achieve a better super-resolution (SR) result. We conduct experiments on a publicly available dataset of 507 MR volumes of the knee joint and an in-house dataset of 130 CT volumes of the lower spine. Results from our comprehensive experiments demonstrate the superior performance of UASSR over other state-of-the-art methods. A reference implementation of our method is available at https://github.com/jialewang1/UASSR.
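The abstract does not specify how the HR volumes reconstructed from different views are combined at inference. The short Python sketch below illustrates one plausible fusion rule, simple voxel-wise averaging of SR volumes resampled to a common HR grid; the function name fuse_multiview_volumes, the averaging rule, and the example shapes are assumptions for illustration, not the paper's actual fusion strategy.

import numpy as np

def fuse_multiview_volumes(volumes):
    # Fuse HR volumes super-resolved along different viewing planes
    # (e.g. sagittal and coronal). All inputs are assumed to be 3-D arrays
    # of identical shape, already resampled to the same HR grid.
    # NOTE: voxel-wise averaging is an assumed fusion rule for illustration.
    stacked = np.stack(volumes, axis=0)   # shape: (n_views, D, H, W)
    return stacked.mean(axis=0)           # voxel-wise average as the fused volume

# Hypothetical usage with two orthogonal reconstructions:
sr_sagittal = np.random.rand(160, 384, 384).astype(np.float32)
sr_coronal = np.random.rand(160, 384, 384).astype(np.float32)
fused = fuse_multiview_volumes([sr_sagittal, sr_coronal])
print(fused.shape)  # (160, 384, 384)

Averaging tends to suppress view-dependent interpolation artifacts; weighted or median fusion would be equally plausible variants under the same setup.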