Authors
Hao Feng,Liejun Wang,Yongming Li,Anyu Du
Identifiers
DOI:10.1016/j.knosys.2022.109376
Abstract
Image super-resolution (SR) aims to recover a high-resolution image from a given low-resolution image. Most state-of-the-art methods extract image features with fixed small convolution kernels (e.g., 1 × 1, 3 × 3), and few works have explored large convolution kernels for SR. In this paper, we propose LKASR, a novel lightweight baseline model based on large kernel attention (LKA). LKASR consists of three parts: shallow feature extraction, deep feature extraction, and high-quality image reconstruction. In particular, the deep feature extraction module consists of multiple cascaded visual attention modules (VAMs), each composed of a 1 × 1 convolution, a large kernel attention block (acting as a Transformer) and a feature refinement module (FRM, acting as a CNN). Specifically, the VAM adopts a lightweight architecture similar to the Swin Transformer to iteratively extract global and local image features, which greatly improves the efficiency of the SR method (0.049 s on the Urban100 dataset). For different scales (×2, ×3, ×4), extensive experimental results on benchmark datasets demonstrate that LKASR outperforms most lightweight SR methods by up to 0.17–0.34 dB, while the total number of parameters and FLOPs remains small.
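The abstract's claim that a large-kernel attention block can stay lightweight rests on decomposing one large convolution into cheaper pieces. A minimal arithmetic sketch of that trade-off is below; the specific decomposition (a 21 × 21 kernel split into a 5 × 5 depthwise convolution, a 7 × 7 depthwise convolution with dilation 3, and a 1 × 1 convolution) is an assumption borrowed from the LKA literature the paper builds on, not a detail stated in this abstract, and the channel count is hypothetical.

```python
# Hedged sketch: parameter and receptive-field arithmetic for an LKA-style
# decomposition. Kernel sizes (5x5 depthwise + 7x7 depthwise, dilation 3 + 1x1)
# are assumed from the large-kernel-attention line of work, not this abstract.

def depthwise_params(channels: int, k: int) -> int:
    """Parameters of a depthwise k x k convolution (one k x k filter per channel)."""
    return channels * k * k

def pointwise_params(channels: int) -> int:
    """Parameters of a 1x1 convolution mapping C channels to C channels."""
    return channels * channels

def receptive_field(k_local: int, k_dilated: int, dilation: int) -> int:
    """Receptive field of a k_local depthwise conv followed by a dilated conv."""
    return k_local + (k_dilated - 1) * dilation

C = 64  # hypothetical channel count

naive = depthwise_params(C, 21)            # one 21x21 depthwise convolution
decomposed = (depthwise_params(C, 5)       # 5x5 depthwise (local detail)
              + depthwise_params(C, 7)     # 7x7 depthwise, dilation 3 (long range)
              + pointwise_params(C))       # 1x1 channel mixing

print(naive, decomposed, receptive_field(5, 7, 3))
```

At 64 channels the decomposition needs roughly a third of the parameters of the single large kernel while keeping a receptive field at least as wide, which is how the module can "act as a Transformer" (global context) at CNN-level cost.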