Artificial intelligence
Image fusion
Computer science
Pattern recognition
Wavelet
Fusion
Computer vision
Engineering
Authors
Cheng Zhao,Peng Yang,Feng Zhou,Guanghui Yue,Shuigen Wang,Huisi Wu,Guoliang Chen,Tianfu Wang,Baiying Lei
Identifier
DOI: 10.1109/TNNLS.2023.3271059
Abstract
Image fusion technology aims to obtain a comprehensive image containing a specific target or detailed information by fusing data of different modalities. However, many deep-learning-based algorithms account for edge texture information only through loss functions rather than through specifically constructed network modules, and the influence of middle-layer features is ignored, which leads to the loss of detailed information between layers. In this article, we propose a multidiscriminator hierarchical wavelet generative adversarial network (MHW-GAN) for multimodal image fusion. First, we construct a hierarchical wavelet fusion (HWF) module as the generator of MHW-GAN to fuse feature information at different levels and scales, which avoids information loss in the middle layers of different modalities. Second, we design an edge perception module (EPM) to integrate edge information from different modalities and avoid the loss of edge information. Third, we leverage the adversarial learning relationship between the generator and three discriminators to constrain the generation of fusion images: the generator aims to generate a fusion image that fools the three discriminators, while the three discriminators aim to distinguish the fusion image and the edge-fusion image from the two source images and the joint edge image, respectively. Through adversarial learning, the final fusion image contains both intensity information and structure information. Experiments on four types of public and self-collected multimodal image datasets show that the proposed algorithm is superior to previous algorithms in terms of both subjective and objective evaluation.
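To make the three-discriminator constraint concrete, the sketch below shows one generator-side training step under stated assumptions: a PyTorch implementation, single-channel source images, a Sobel operator standing in for the paper's edge perception module, and a pixelwise max rule for building the joint edge image. The module names G, D1, D2, and D3 are hypothetical placeholders; none of these internals are specified in the abstract, so this is an illustrative sketch rather than the authors' method.

```python
# Minimal sketch (not the authors' code) of the three-discriminator
# adversarial constraint described in the abstract. Assumes PyTorch and
# grayscale (N, 1, H, W) inputs; the Sobel edge map stands in for the
# paper's EPM, and the max rule for the joint edge image is an assumption.
import torch
import torch.nn.functional as F

def sobel_edges(x: torch.Tensor) -> torch.Tensor:
    """Edge-magnitude map of a (N, 1, H, W) image via Sobel gradients."""
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]], device=x.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)  # Sobel y-kernel is the transpose of the x-kernel
    gx = F.conv2d(x, kx, padding=1)
    gy = F.conv2d(x, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def generator_adv_loss(G, D1, D2, D3, src_a, src_b):
    """One generator-side step: try to fool all three discriminators.

    D1 contrasts the fused image with source A, D2 with source B, and
    D3 contrasts the edge map of the fused image with the joint edge
    map of both sources, as the abstract describes.
    """
    fused = G(src_a, src_b)                      # fusion image
    fused_edge = sobel_edges(fused)              # edge-fusion image
    joint_edge = torch.max(sobel_edges(src_a),   # joint edge image
                           sobel_edges(src_b))

    # Non-saturating GAN loss: the generator is rewarded when each
    # discriminator scores its output as "real".
    loss = 0.0
    for d, sample in ((D1, fused), (D2, fused), (D3, fused_edge)):
        logits = d(sample)
        loss = loss + F.binary_cross_entropy_with_logits(
            logits, torch.ones_like(logits))
    return loss, fused_edge, joint_edge
```

In the corresponding discriminator step (not shown), D1 and D2 would treat the two source images as real samples and the fused image as fake, while D3 would treat the joint edge image as real and the fused image's edge map as fake; the HWF generator internals and the paper's exact loss weighting are not given in the abstract.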