Bottleneck
Information bottleneck method
Resolution (logic)
Image (mathematics)
Superresolution
Computer science
Computer vision
Artificial intelligence
Embedded system
Mutual information
Authors
Chih-Chung Hsu, Chia-Ming Lee, Yi-Shiuan Chou
Source
Journal: Cornell University - arXiv
Date: 2024-03-31
Identifier
DOI: 10.48550/arxiv.2404.00722
Abstract
In recent years, Vision Transformer-based approaches to low-level vision tasks have achieved widespread success. Unlike CNN-based models, Transformers are more adept at capturing long-range dependencies, enabling the reconstruction of images using information from non-local areas. In the domain of super-resolution, Swin-Transformer-based approaches have become mainstream due to their capacity to capture global spatial information and their shifting-window attention mechanism, which facilitates the interchange of information between different windows. Many researchers have enhanced image quality and network efficiency by expanding the receptive field or designing complex networks, yielding commendable results. However, we observed that spatial information tends to diminish during forward propagation as depth increases, leading to a loss of spatial information and, consequently, limiting the model's potential. To address this, we propose the Dense-residual-connected Transformer (DRCT), aimed at mitigating the loss of spatial information through dense-residual connections between layers, thereby unleashing the model's potential and enhancing performance. Experimental results indicate that our approach is not only straightforward but also remarkably efficient, surpassing state-of-the-art methods and performing commendably at NTIRE2024.
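The core idea described in the abstract — each layer receives the concatenated features of the block input and all preceding layers, with a residual skip preserving the block input at the end — can be sketched as follows. This is a minimal NumPy illustration of a generic dense-residual block, with made-up layer shapes and simple linear+ReLU layers standing in for Transformer layers; it is not the authors' DRCT implementation.

```python
import numpy as np

def make_layer(in_dim, out_dim, seed):
    """A toy stand-in for a Transformer layer: linear map + ReLU.
    (Hypothetical helper for illustration only.)"""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((in_dim, out_dim)) * 0.1
    return lambda f: np.maximum(f @ W, 0.0)

def dense_residual_block(x, layers):
    """Dense connections: layer i sees [x, out_0, ..., out_{i-1}]
    concatenated along the feature axis, so early spatial features
    are re-exposed to every later layer instead of fading with depth.
    A residual skip then adds the block input back to the output."""
    features = [x]
    for layer in layers:
        out = layer(np.concatenate(features, axis=-1))
        features.append(out)
    return features[-1] + x  # residual connection

d = 4  # feature dimension (arbitrary for the sketch)
# layer i takes d * (i + 1) input features due to dense concatenation
layers = [make_layer(d * (i + 1), d, seed=i) for i in range(3)]
x = np.ones((2, d))
y = dense_residual_block(x, layers)
print(y.shape)  # output keeps the input's shape
```

Note how the input dimension of each toy layer grows with depth: that widening of the concatenated feature stream is what lets later layers reuse earlier spatial information directly, which is the loss the paper aims to mitigate.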