Decoupling (probability)
Image fusion
Fusion
Infrared
Computer science
Computer vision
Artificial intelligence
Image (mathematics)
Optics
Physics
Engineering
Control engineering
Linguistics
Philosophy
Authors
Xue Wang,Zheng Guan,Shishuang Yu,Jinde Cao,Ya Li
Identifiers
DOI:10.1109/tim.2022.3216413
Abstract
In general, the goal of existing infrared and visible image fusion (IVIF) methods is to make the fused image contain both the high-contrast regions of the infrared image and the texture details of the visible image. However, this definition causes the fused image to lose information from the visible image in high-contrast areas. To address this problem, this paper proposes a decoupling-network-based IVIF method (DNFusion), which uses decoupled maps to impose additional constraints on the network, forcing it to retain the saliency information of the source images effectively. The current definition of image fusion is satisfied while the saliency objects of the source images are effectively preserved. Specifically, an internal feature interaction module facilitates information exchange within the encoder and improves the utilization of complementary information. In addition, a hybrid loss function composed of weight fidelity loss, gradient loss, and decoupling loss ensures that the generated fused image effectively preserves the texture details and luminance information of the source images. Qualitative and quantitative comparisons across extensive experiments demonstrate that our model generates fused images containing the saliency objects and clear details of the source images, and that the proposed method outperforms other state-of-the-art methods.
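The abstract names a hybrid loss built from weight fidelity, gradient, and decoupling terms driven by decoupled saliency maps. The paper's exact formulation is not given here, so the following is only a minimal NumPy sketch of that idea under stated assumptions: `sal_ir` and `sal_vis` are hypothetical decoupled saliency maps in [0, 1], the gradient operator is a simple finite difference, and all weights and targets are illustrative, not the authors' definitions.

```python
import numpy as np

def gradient(img):
    # Simple finite-difference gradient magnitude (illustrative stand-in
    # for whatever gradient operator the paper actually uses).
    gx = np.abs(np.diff(img, axis=1, append=img[:, -1:]))
    gy = np.abs(np.diff(img, axis=0, append=img[-1:, :]))
    return np.maximum(gx, gy)

def hybrid_loss(fused, ir, vis, sal_ir, sal_vis,
                w_fid=1.0, w_grad=1.0, w_dec=1.0):
    """Sketch of a hybrid loss: weighted fidelity + gradient + decoupling.

    sal_ir / sal_vis are assumed decoupled saliency maps in [0, 1];
    the term definitions and weights are assumptions for illustration.
    """
    # Weight fidelity: pixel intensities follow a saliency-weighted
    # mix of the two source images.
    target = sal_ir * ir + sal_vis * vis
    fidelity = np.mean((fused - target) ** 2)

    # Gradient loss: fused gradients should match the stronger
    # source gradient at each pixel, preserving texture details.
    grad_target = np.maximum(gradient(ir), gradient(vis))
    grad_loss = np.mean(np.abs(gradient(fused) - grad_target))

    # Decoupling loss: inside each source's saliency region, the fused
    # image should stay close to that source's luminance.
    dec = (np.mean(sal_ir * np.abs(fused - ir)) +
           np.mean(sal_vis * np.abs(fused - vis)))

    return w_fid * fidelity + w_grad * grad_loss + w_dec * dec
```

When `fused` equals the saliency-weighted target and both sources agree, every term vanishes; any deviation in intensity, gradient, or saliency regions increases the loss, which is the qualitative behavior the abstract attributes to its three-term objective.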