PPformer: Using pixel-wise and patch-wise cross-attention for low-light image enhancement

Subjects: Computer Science · Artificial Intelligence · Pixel · Pattern Recognition (Psychology) · Computer Vision
Authors
Jiachen Dang, Yong Zhong, Xiaolin Qin
Source
Journal: Computer Vision and Image Understanding [Elsevier]
Volume/Issue: 103930-103930 · Cited by: 4
Identifier
DOI: 10.1016/j.cviu.2024.103930
Abstract

Recently, transformer-based methods have become strongly competitive with CNN-based methods on the low-light image enhancement task by employing self-attention for feature extraction. Transformer-based methods excel at modeling long-range pixel dependencies, which are essential for low-light image enhancement to achieve better lighting, natural colors, and higher contrast. However, the high computational cost of self-attention limits its adoption in low-light image enhancement, and some works struggle to balance accuracy and computational cost. In this work, we propose PPformer, a lightweight and effective network for low-light image enhancement based on a pixel-wise and patch-wise cross-attention mechanism. PPformer is a CNN-transformer hybrid network divided into three parts: a local branch, a global branch, and Dual Cross-Attention, each of which plays a vital role. Specifically, the local branch extracts local structural information using a stack of Wide Enhancement Modules, and the global branch provides refined global information through a Cross Patch Module and a Global Convolution Module. Moreover, unlike self-attention, we use the extracted global semantic information to guide the modeling of dependencies between local and non-local features. By computing Dual Cross-Attention, PPformer can effectively restore images with better color consistency, natural brightness, and contrast. Benefiting from the proposed dual cross-attention mechanism, PPformer effectively captures dependencies at both the pixel and patch levels over the full-size feature map. Extensive experiments on eleven real-world benchmark datasets show that PPformer achieves better quantitative and qualitative results than previous state-of-the-art methods.
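The abstract describes cross-attention in which local-branch features are guided by global-branch information at both pixel and patch granularity. The PyTorch sketch below illustrates that general idea only; the module shapes, single-head attention, pooling-based patch tokenization, and residual fusion are assumptions made for illustration and are not the authors' implementation.

```python
# Minimal sketch of pixel-wise + patch-wise cross-attention between a local
# branch (queries) and a global branch (keys/values). All design details here
# are assumptions for illustration, not PPformer's actual modules.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossAttention(nn.Module):
    """Single-head cross-attention: queries come from x, keys/values from ctx."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, dim * 2)
        self.proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x, ctx):                      # x: (B, Nx, C), ctx: (B, Nc, C)
        q = self.q(x)
        k, v = self.kv(ctx).chunk(2, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        out = attn.softmax(dim=-1) @ v              # (B, Nx, C)
        return self.proj(out)


class DualCrossAttentionSketch(nn.Module):
    """Pixel-level and patch-level cross-attention between local and global features."""
    def __init__(self, dim, patch=8):
        super().__init__()
        self.patch = patch
        self.pixel_attn = CrossAttention(dim)
        self.patch_attn = CrossAttention(dim)

    def forward(self, local_feat, global_feat):     # both (B, C, H, W)
        B, C, H, W = local_feat.shape
        h, w = H // self.patch, W // self.patch

        # Pixel-wise: every local pixel attends to a pooled set of global tokens
        # (pooling keeps the attention map small; this is an assumption of the sketch).
        g_small = F.adaptive_avg_pool2d(global_feat, (h, w))
        q_pix = local_feat.flatten(2).transpose(1, 2)        # (B, H*W, C)
        k_pix = g_small.flatten(2).transpose(1, 2)           # (B, h*w, C)
        pix = self.pixel_attn(q_pix, k_pix)
        pix = pix.transpose(1, 2).reshape(B, C, H, W)

        # Patch-wise: pool both maps into patch tokens, attend, then broadcast
        # the refined patch tokens back to full resolution.
        l_tok = F.adaptive_avg_pool2d(local_feat, (h, w)).flatten(2).transpose(1, 2)
        g_tok = F.adaptive_avg_pool2d(global_feat, (h, w)).flatten(2).transpose(1, 2)
        pat = self.patch_attn(l_tok, g_tok)
        pat = pat.transpose(1, 2).reshape(B, C, h, w)
        pat = F.interpolate(pat, size=(H, W), mode="nearest")

        return local_feat + pix + pat               # residual fusion (assumed)


if __name__ == "__main__":
    x_local = torch.randn(1, 32, 64, 64)
    x_global = torch.randn(1, 32, 64, 64)
    out = DualCrossAttentionSketch(dim=32)(x_local, x_global)
    print(out.shape)  # torch.Size([1, 32, 64, 64])
```

In this reading, the global branch supplies the semantic context that guides how each local pixel and each patch token relates to non-local regions, while the local branch preserves full-resolution structure; the paper's own modules should be consulted for the exact design.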