Computer science
Convolution (computer science)
Feature (linguistics)
Image resolution
Pixel
Artificial intelligence
Algorithm
Image (mathematics)
Convolutional neural network
Generalization
Computational complexity theory
Resolution (logic)
Pattern recognition (psychology)
Domain (mathematics)
Mathematics
Artificial neural network
Mathematical analysis
Philosophy
Linguistics
Pure mathematics
Authors
Zhendong Zhang, Xinran Wang, Cheolkon Jung
Source
Journal: IEEE Transactions on Image Processing
[Institute of Electrical and Electronics Engineers]
Date: 2018-10-22
Volume/Issue: 28 (4): 1625-1635
Citations: 157
Identifier
DOI: 10.1109/tip.2018.2877483
Abstract
Dilated convolutions expand the receptive field without parameter explosion or resolution loss, which makes them well suited to pixel-level prediction problems. In this paper, we propose multiscale single-image super-resolution (SR) based on dilated convolutions. We adopt dilated convolutions to expand the receptive field without incurring additional computational complexity. We mix standard and dilated convolutions in each layer, called mixed convolutions: in a mixed convolutional layer, the features extracted by dilated convolutions and standard convolutions are concatenated. We theoretically analyze the receptive field and intensity of mixed convolutions to discover their role in SR. Mixed convolutions remove blind spots and successfully capture the correlation between low-resolution (LR) and high-resolution (HR) image pairs, thus achieving good generalization ability. We verify these properties of mixed convolutions by training 5-layer and 10-layer networks. We also train a 20-layer deep network to compare the performance of the proposed method with that of state-of-the-art methods. Moreover, we jointly learn maps at different scales from an LR image to its HR counterpart in a single network. Experimental results demonstrate that the proposed method outperforms the state-of-the-art methods in terms of PSNR and SSIM, especially for large scale factors.
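A mixed convolutional layer, as described in the abstract, concatenates the responses of a standard convolution and a dilated convolution so that the layer widens its receptive field without adding parameters or losing resolution. The following is a minimal single-channel NumPy sketch of that idea; the function names and the one-kernel-per-branch setup are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def conv2d_same(x, kernel, dilation=1):
    """'Same'-padded 2D convolution of a single-channel image.

    A 3x3 kernel with dilation d covers an effective (2d+1)x(2d+1)
    receptive field while still using only 9 parameters.
    """
    kh, kw = kernel.shape
    eff_h = dilation * (kh - 1) + 1          # effective kernel height
    eff_w = dilation * (kw - 1) + 1
    ph, pw = eff_h // 2, eff_w // 2          # zero padding for 'same' output size
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    h, w = x.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(kh):                      # accumulate shifted, weighted copies
        for j in range(kw):
            out += kernel[i, j] * xp[i * dilation:i * dilation + h,
                                     j * dilation:j * dilation + w]
    return out

def mixed_conv(x, k_std, k_dil, dilation=2):
    """Mixed convolution (illustrative): concatenate the standard and
    dilated responses along a leading channel axis."""
    return np.stack([conv2d_same(x, k_std, dilation=1),
                     conv2d_same(x, k_dil, dilation=dilation)], axis=0)

x = np.arange(36, dtype=float).reshape(6, 6)   # toy LR patch (hypothetical input)
k = np.full((3, 3), 1.0 / 9.0)                 # averaging kernel
features = mixed_conv(x, k, k, dilation=2)
print(features.shape)                          # (2, 6, 6): two concatenated channels
```

Because the dilated branch samples the input on a strided grid, it alone leaves "blind spots" between its taps; pairing it with the dense standard branch, as the paper's mixed layers do, covers those gaps while keeping the enlarged receptive field.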