Computer science
Artificial intelligence
Pooling
Pyramid (geometry)
Feature learning
Pattern recognition (psychology)
Encoder
Convolutional neural network
Feature (linguistics)
Feature extraction
Context (archaeology)
Scale (ratio)
Deep learning
Computer vision
Mathematics
Operating system
Physics
Philosophy
Geometry
Paleontology
Biology
Quantum mechanics
Linguistics
Authors
Yu Liu, Zhigang Yang, Juan Cheng, Xun Chen
Source
Journal: IEEE Signal Processing Letters
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume: 30, Pages: 100-104
Citations: 1
Identifier
DOI: 10.1109/lsp.2023.3243767
Abstract
In this letter, a deep learning (DL)-based multi-exposure image fusion (MEF) method via multi-scale and context-aware feature learning is proposed, aiming to overcome the defects of existing traditional and DL-based methods. The proposed network is based on an auto-encoder architecture. First, an encoder that combines the convolutional network and Transformer is designed to extract multi-scale features and capture the global contextual information. Then, a multi-scale feature interaction (MSFI) module is devised to enrich the scale diversity of extracted features using cross-scale fusion and Atrous spatial pyramid pooling (ASPP). Finally, a decoder with a nest connection architecture is introduced to reconstruct the fused image. Experimental results show that the proposed method outperforms several representative traditional and DL-based MEF methods in terms of both visual quality and objective assessment.
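The abstract mentions two generic building blocks, cross-scale fusion and Atrous spatial pyramid pooling (ASPP), inside the MSFI module. As a rough illustration of how such a step could look, the sketch below combines an upsample-and-concatenate cross-scale fusion with a standard ASPP block in PyTorch. The module names, dilation rates, and channel widths are assumptions for illustration only and are not taken from the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated 3x3 convolutions
    whose outputs are concatenated and fused by a 1x1 projection.
    Dilation rates and channel widths here are illustrative only."""
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))


class CrossScaleFusion(nn.Module):
    """Hypothetical stand-in for the cross-scale step of an MSFI-style module:
    a coarse feature map is upsampled to the fine resolution, concatenated
    with the fine map, and passed through ASPP to enrich scale diversity."""
    def __init__(self, fine_ch, coarse_ch, out_ch):
        super().__init__()
        self.aspp = ASPP(fine_ch + coarse_ch, out_ch)

    def forward(self, fine, coarse):
        coarse_up = F.interpolate(coarse, size=fine.shape[-2:],
                                  mode="bilinear", align_corners=False)
        return self.aspp(torch.cat([fine, coarse_up], dim=1))


if __name__ == "__main__":
    fine = torch.randn(1, 32, 64, 64)    # higher-resolution encoder feature
    coarse = torch.randn(1, 64, 32, 32)  # lower-resolution encoder feature
    fused = CrossScaleFusion(32, 64, 64)(fine, coarse)
    print(fused.shape)                   # torch.Size([1, 64, 64, 64])
```

The dilated branches keep the spatial size fixed while widening the receptive field, which is the usual motivation for ASPP in multi-scale feature aggregation; the actual MSFI design in the paper may differ.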