Computer science
Coding (set theory)
Artificial intelligence
Enhanced Data Rates for GSM Evolution (EDGE)
Source code
Image (mathematics)
Noise (video)
Domain (mathematics)
Feature extraction
Feature (linguistics)
Deep learning
Computer vision
Pattern recognition (psychology)
Data mining
Authors
Jiawei Li, Jinyuan Liu, Shihua Zhou, Qiang Zhang, Jie Yang
Source
Journal: IEEE Transactions on Circuits and Systems for Video Technology
[Institute of Electrical and Electronics Engineers]
Date: 2022-01-01
Volume/Issue: 1-1
Identifier
DOI: 10.1109/tcsvt.2022.3202692
Abstract
Deep learning has made rapid progress in the field of multi-exposure image fusion. However, it remains challenging to extract useful features while retaining texture details and color. To address this issue, we propose a coordinated learning network for detail refinement in an end-to-end manner. Firstly, we obtain shallow feature maps from extremely over-/under-exposed source images with a collaborative extraction module. Secondly, smooth attention weight maps are generated under the guidance of a self-attention module, which establishes global connections that correlate patches at different locations. With the cooperation of these two modules, the proposed network produces a coarse fused image. Moreover, with the assistance of an edge revision module, the edge details of the fused results are refined and noise is suppressed effectively. We conduct subjective qualitative and objective quantitative comparisons between the proposed method and twelve state-of-the-art methods on two public datasets. The results show that our fused images significantly outperform the others in both visual quality and evaluation metrics. In addition, we perform ablation experiments to verify the function and effectiveness of each module in the proposed method. The source code is available at https://github.com/lok-18/LCNDR.
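The abstract describes a three-stage pipeline: collaborative shallow feature extraction from the over-/under-exposed inputs, self-attention-guided weighting to form a coarse fusion, and an edge revision step that refines the result. The following is a minimal PyTorch sketch of that general flow, not the authors' LCNDR implementation (see the repository above for the official code); the class names, single-channel inputs, difference-feature attention input, and residual refinement are illustrative assumptions.

```python
# Hypothetical sketch of the fusion pipeline outlined in the abstract.
# Shapes, module names, and design details are assumptions, not LCNDR itself.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """3x3 convolution followed by LeakyReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.LeakyReLU(0.2, inplace=True),
    )


class SelfAttention(nn.Module):
    """Non-local style self-attention that outputs a smooth, single-channel weight map."""

    def __init__(self, ch):
        super().__init__()
        self.q = nn.Conv2d(ch, ch // 2, 1)
        self.k = nn.Conv2d(ch, ch // 2, 1)
        self.v = nn.Conv2d(ch, ch, 1)
        self.out = nn.Conv2d(ch, 1, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # (b, hw, c/2)
        k = self.k(x).flatten(2)                   # (b, c/2, hw)
        v = self.v(x).flatten(2).transpose(1, 2)   # (b, hw, c)
        attn = torch.softmax(q @ k / (c // 2) ** 0.5, dim=-1)  # global patch correlation
        y = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return torch.sigmoid(self.out(y))          # weight map in [0, 1]


class FusionNet(nn.Module):
    """Coarse fusion of over-/under-exposed inputs plus residual edge refinement."""

    def __init__(self, ch=32):
        super().__init__()
        # Shared-weight shallow feature extraction applied to both exposures.
        self.extract = nn.Sequential(conv_block(1, ch), conv_block(ch, ch))
        self.attn = SelfAttention(ch)
        self.to_img = nn.Conv2d(ch, 1, 3, padding=1)
        # Refinement head acting on the coarse fusion (edge revision stand-in).
        self.refine = nn.Sequential(conv_block(1, ch), nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, over, under):
        f_o, f_u = self.extract(over), self.extract(under)
        w = self.attn(f_o - f_u)                   # attention-guided weight map
        coarse = self.to_img(w * f_o + (1 - w) * f_u)
        return torch.clamp(coarse + self.refine(coarse), 0, 1)


if __name__ == "__main__":
    net = FusionNet()
    over = torch.rand(1, 1, 64, 64)    # over-exposed luminance channel
    under = torch.rand(1, 1, 64, 64)   # under-exposed luminance channel
    print(net(over, under).shape)      # torch.Size([1, 1, 64, 64])
```

The sketch fuses the two exposures with a learned spatial weight map and then adds a residual correction from the refinement head, mirroring the coarse-then-refine structure the abstract describes; loss functions and training details are omitted.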