PPformer: Using pixel-wise and patch-wise cross-attention for low-light image enhancement

Computer science · Artificial intelligence · Pixel · Pattern recognition (psychology) · Computer vision
Authors
Jiachen Dang, Yong Zhong, Xiaolin Qin
Source
Journal: Computer Vision and Image Understanding [Elsevier]
Article number: 103930 · Cited by: 4
Identifier
DOI: 10.1016/j.cviu.2024.103930
Abstract

Recently, transformer-based methods have become strong competitors to CNN-based methods on the low-light image enhancement task by employing self-attention for feature extraction. Transformer-based methods perform well at modeling long-range pixel dependencies, which are essential for low-light image enhancement to achieve better lighting, natural colors, and higher contrast. However, the high computational cost of self-attention limits its use in low-light image enhancement, and some works struggle to balance accuracy and computational cost. In this work, we propose PPformer, a lightweight and effective network for low-light image enhancement based on the proposed pixel-wise and patch-wise cross-attention mechanism. PPformer is a CNN-transformer hybrid network divided into three parts: a local branch, a global branch, and Dual Cross-Attention, each of which plays a vital role. Specifically, the local branch extracts local structural information using a stack of Wide Enhancement Modules, and the global branch provides refined global information via a Cross Patch Module and a Global Convolution Module. In addition, unlike self-attention, we use the extracted global semantic information to guide the modeling of dependencies between local and non-local features. By computing Dual Cross-Attention, PPformer can effectively restore images with better color consistency, natural brightness, and contrast. Benefiting from the proposed dual cross-attention mechanism, PPformer effectively captures dependencies at both the pixel and patch levels over the full-size feature map. Extensive experiments on eleven real-world benchmark datasets show that PPformer achieves better quantitative and qualitative results than previous state-of-the-art methods.
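The efficiency argument behind cross-attention can be illustrated with a minimal sketch (an assumption-laden toy, not the authors' implementation; all function and variable names here are hypothetical): queries are taken from the N per-pixel features while keys and values come from a small set of P global patch tokens, so the attention matrix is N×P instead of the N×N matrix of full self-attention.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(pixel_feats, patch_tokens):
    """Toy pixel-to-patch cross-attention (illustrative only).

    pixel_feats:  (N, C) array, one feature vector per pixel (queries).
    patch_tokens: (P, C) array, global patch summaries (keys/values).
    Returns an (N, C) array: each pixel aggregates global patch info.
    """
    Q, K, V = pixel_feats, patch_tokens, patch_tokens
    d_k = Q.shape[-1]
    # (N, P) attention map: cost O(N * P), not O(N^2) as in self-attention.
    attn = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)
    return attn @ V

# Usage: 64 pixels of 8-channel features attend to 4 patch tokens.
rng = np.random.default_rng(0)
out = cross_attention(rng.standard_normal((64, 8)), rng.standard_normal((4, 8)))
print(out.shape)  # (64, 8)
```

Because each output row is a convex combination of the P patch tokens, every pixel is refined by the same compact global summary, which is the intuition behind letting global semantic information guide local enhancement.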