
PPformer: Using pixel-wise and patch-wise cross-attention for low-light image enhancement

Topics: Computer Science · Artificial Intelligence · Pixel · Pattern Recognition · Computer Vision
Authors
Jiachen Dang, Yong Zhong, Xiaolin Qin
Source
Journal: Computer Vision and Image Understanding [Elsevier BV]
Volume 241, Article 103930 · Cited by: 17
Identifier
DOI: 10.1016/j.cviu.2024.103930
Abstract

Recently, transformer-based methods have become strong competitors to CNN-based methods on the low-light image enhancement task by employing self-attention for feature extraction. Transformer-based methods excel at modeling long-range pixel dependencies, which are essential for low-light image enhancement to achieve better lighting, natural colors, and higher contrast. However, the high computational cost of self-attention limits its adoption in low-light image enhancement, and some works struggle to balance accuracy against computational cost. In this work, we propose PPformer, a lightweight and effective network for low-light image enhancement based on the proposed pixel-wise and patch-wise cross-attention mechanism. PPformer is a CNN-transformer hybrid network divided into three parts: a local branch, a global branch, and Dual Cross-Attention, each of which plays a vital role. Specifically, the local branch extracts local structural information with a stack of Wide Enhancement Modules, while the global branch provides refined global information through a Cross Patch Module and a Global Convolution Module. Moreover, unlike self-attention, our mechanism uses the extracted global semantic information to guide the modeling of dependencies between local and non-local features. By computing Dual Cross-Attention, PPformer can effectively restore images with better color consistency, natural brightness, and contrast. Benefiting from the proposed dual cross-attention mechanism, PPformer captures dependencies at both the pixel and patch levels across the full-size feature map. Extensive experiments on eleven real-world benchmark datasets show that PPformer achieves better quantitative and qualitative results than previous state-of-the-art methods.
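The abstract does not give implementation details, but the core efficiency argument — letting every pixel attend to a small set of global patch tokens instead of to every other pixel — can be illustrated with a minimal NumPy sketch of cross-attention. All shapes, names, and the random features below are illustrative assumptions, not the authors' actual PPformer code:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: queries attend to keys/values."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)    # (Nq, Nk) similarity matrix
    return softmax(scores, axis=-1) @ values  # (Nq, C) attended features

# Pixel-wise cross-attention: each of the H*W pixel queries attends to
# only M global patch tokens, so the score matrix is (H*W, M) rather
# than the (H*W, H*W) required by full self-attention.
H, W, C, M = 8, 8, 16, 4
rng = np.random.default_rng(0)
pixel_feats = rng.standard_normal((H * W, C))   # stand-in for local-branch features
patch_tokens = rng.standard_normal((M, C))      # stand-in for global patch summaries

enhanced = cross_attention(pixel_feats, patch_tokens, patch_tokens)
print(enhanced.shape)  # (64, 16)
```

With H*W = 64 pixels and M = 4 patch tokens, the attention map has 256 entries instead of the 4096 a full self-attention map would need, which is the kind of pixel-to-patch asymmetry the paper's dual cross-attention exploits.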