Authors
Jinyuan Liu, Jingjie Shang, Risheng Liu, Xin Fan
Source
Journal: IEEE Transactions on Circuits and Systems for Video Technology
[Institute of Electrical and Electronics Engineers]
Date: 2022-08-01
Volume/Issue: 32 (8): 5026-5040
Citations: 59
Identifier
DOI: 10.1109/tcsvt.2022.3144455
Abstract
Deep learning networks have recently yielded impressive progress for multi-exposure image fusion. However, restoring realistic texture details while correcting color distortion remains a challenging problem. To alleviate these issues, in this paper we propose an attention-guided global-local adversarial learning network that fuses extreme-exposure images in a coarse-to-fine manner. First, a coarse fusion result is generated under the guidance of attention weight maps, which capture the essential regions of interest from both source images. Second, we formulate an edge loss function, along with a spatial feature transform layer, to refine the fusion process so that it can make full use of edge information to handle blurry edges. Moreover, by incorporating global-local learning, our method can balance pixel intensity distribution and correct color distortion on spatially varying source images from both the image and patch perspectives. Such a global-local discriminator ensures that all local patches of the fused images align with realistic normal-exposure ones. Extensive experimental results on two publicly available datasets show that our method drastically outperforms state-of-the-art methods in both visual inspection and objective analysis. Furthermore, sufficient ablation experiments prove that our method has significant advantages in generating high-quality fused results with appealing details, clear targets, and faithful color. Source code will be available at https://github.com/JinyuanLiu-CV/AGAL .
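The coarse fusion stage described in the abstract blends the two exposures under per-pixel attention weight maps. A minimal sketch of that weighted-blending idea is shown below; note it substitutes a hand-crafted "well-exposedness" weight (a standard exposure-fusion heuristic) for the paper's learned attention network, so the function names and the weighting rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    # Heuristic attention surrogate: weight each pixel by how close its
    # intensity is to mid-gray (0.5), via a Gaussian. Over- and under-exposed
    # pixels receive low weight. This stands in for the paper's learned
    # attention weight maps.
    return np.exp(-((img - 0.5) ** 2) / (2.0 * sigma ** 2))

def coarse_fuse(under, over, eps=1e-8):
    # Normalize the two weight maps so they sum to 1 at every pixel,
    # then take the weighted average to obtain the coarse fusion result.
    w_u = well_exposedness(under)
    w_o = well_exposedness(over)
    return (w_u * under + w_o * over) / (w_u + w_o + eps)

# Toy example: a uniformly dark (under-exposed) and a uniformly bright
# (over-exposed) image in [0, 1]; equal weights pull the result to mid-gray.
under = np.full((4, 4), 0.1)
over = np.full((4, 4), 0.9)
fused = coarse_fuse(under, over)
```

Because 0.1 and 0.9 are equally far from mid-gray, both exposures get equal weight here and the fused result is uniformly 0.5; on real images the weights vary per pixel, so well-exposed regions of each source dominate locally.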