Computer science
Artificial intelligence
Image fusion
Fuse (electrical)
Transmission (telecommunications)
Computer vision
Encoder
Fusion
Residual
Fusion rule
Convolution (computer science)
Pattern recognition (psychology)
Image (mathematics)
Artificial neural network
Engineering
Algorithm
Telecommunications
Linguistics
Philosophy
Electrical engineering
Operating system
Authors
Qingqing Li, Guangliang Han, Peixun Liu, Hang Yang, Dianbing Chen, Xinglong Sun, Jiajia Wu, Dongxu Liu
Source
Journal: IEEE Transactions on Instrumentation and Measurement
[Institute of Electrical and Electronics Engineers]
Date: 2022-01-01
Volume 71, pp. 1-14
Citations: 10
Identifier
DOI: 10.1109/tim.2022.3186048
Abstract
Infrared and visible image fusion aims to generate an image with prominent target information and abundant texture details. Most existing methods rely on manually designed, complex fusion rules, and some deep learning fusion networks ignore the correlation between features at different levels, which can cause loss of intensity information and texture details in the fused image. To overcome these drawbacks, we propose a multi-level hybrid transmission network for infrared and visible image fusion, which mainly consists of a multi-level residual encoder module and a hybrid transmission decoder module. Considering the large difference between infrared and visible images, the multi-level residual encoder module uses two independent branches to extract abundant features from the source images. To avoid complicated fusion strategies, concatenate-convolution is applied to fuse features. To use information from the source images efficiently, the hybrid transmission decoder module integrates features from different levels. Experimental results and analyses on three public datasets demonstrate that our method achieves high-quality image fusion and outperforms the compared methods in both qualitative and quantitative comparisons. In addition, the proposed method has good real-time performance in infrared and visible image fusion.
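To make the pipeline in the abstract concrete, the PyTorch sketch below shows one plausible reading of it: two independent residual encoder branches (infrared and visible), per-level concatenate-convolution fusion in place of hand-crafted fusion rules, and a decoder that receives features from all levels. The layer widths, depths, skip pattern, and the stand-in for the hybrid transmission decoder are illustrative assumptions, not the authors' published configuration.

```python
# Illustrative sketch only: two-branch residual encoder, concat-conv fusion,
# and a multi-level decoder. All hyperparameters are assumed, not from the paper.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """3x3 conv block with an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))


class Encoder(nn.Module):
    """One branch: stem conv plus residual blocks; returns the feature map
    of every level so the decoder can reuse all of them."""
    def __init__(self, channels=32, levels=3):
        super().__init__()
        self.stem = nn.Conv2d(1, channels, 3, padding=1)
        self.blocks = nn.ModuleList(ResidualBlock(channels) for _ in range(levels))

    def forward(self, x):
        feats = []
        x = torch.relu(self.stem(x))
        for block in self.blocks:
            x = block(x)
            feats.append(x)
        return feats


class FusionNet(nn.Module):
    def __init__(self, channels=32, levels=3):
        super().__init__()
        self.ir_encoder = Encoder(channels, levels)   # infrared branch
        self.vis_encoder = Encoder(channels, levels)  # visible branch
        # Concatenate-convolution fusion: a 1x1 conv per level replaces
        # hand-crafted fusion rules.
        self.fuse = nn.ModuleList(
            nn.Conv2d(2 * channels, channels, 1) for _ in range(levels)
        )
        # Crude stand-in for the hybrid transmission decoder: concatenate the
        # fused features of every level so low- and high-level information
        # both reach the output.
        self.decoder = nn.Sequential(
            nn.Conv2d(levels * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, ir, vis):
        ir_feats = self.ir_encoder(ir)
        vis_feats = self.vis_encoder(vis)
        fused = [
            torch.relu(f(torch.cat([a, b], dim=1)))
            for f, a, b in zip(self.fuse, ir_feats, vis_feats)
        ]
        return self.decoder(torch.cat(fused, dim=1))


if __name__ == "__main__":
    ir = torch.rand(1, 1, 128, 128)   # single-channel infrared image
    vis = torch.rand(1, 1, 128, 128)  # single-channel visible image
    print(FusionNet()(ir, vis).shape)  # torch.Size([1, 1, 128, 128])
```

The key design point the sketch tries to reflect is that fusion happens at every encoder level and the decoder sees all of those fused levels at once, rather than only the deepest feature map, which is how the abstract motivates preserving both intensity information and texture details.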