Keywords: artificial intelligence; image fusion; pattern recognition (psychology); feature (linguistics); fusion; computer science; image (mathematics); constraint (computer-aided design); minimization; modality (human–computer interaction); sensor fusion; mathematics; philosophy; linguistics; geometry; programming languages
Authors
Farshad G. Veshki,Nora Ouzir,Sergiy A. Vorobyov,Esa Ollila
Identifier
DOI:10.1016/j.sigpro.2022.108637
Abstract
This paper presents a multimodal image fusion method using a novel decomposition model based on coupled dictionary learning. The proposed method is general and can be used for a variety of imaging modalities. In particular, the images to be fused are decomposed into correlated and uncorrelated components using sparse representations with identical supports and a Pearson correlation constraint, respectively. The resulting optimization problem is solved by an alternating minimization algorithm. Contrary to other learning-based fusion methods, the proposed approach does not require any training data, and the correlated features are extracted online from the data itself. By preserving the uncorrelated components in the fused images, the proposed fusion method significantly improves on current fusion approaches in terms of maintaining texture details and modality-specific information. The maximum-absolute-value rule is used for the fusion of the correlated components only. This leads to enhanced contrast resolution without causing intensity attenuation or loss of important information. Experimental results show that the proposed method achieves superior performance in terms of both visual and objective evaluations compared to state-of-the-art image fusion methods.
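The abstract names two concrete ingredients: a Pearson correlation constraint on the decomposed components, and the maximum-absolute-value rule applied to the correlated components. The sketch below is not the authors' implementation; it is a minimal NumPy illustration of those two operations on toy coefficient arrays, with all variable names chosen for illustration.

```python
import numpy as np

def max_abs_fuse(coeffs_a, coeffs_b):
    """Max-absolute-value rule: at each position, keep the coefficient
    with the larger magnitude of the two inputs."""
    a = np.asarray(coeffs_a, dtype=float)
    b = np.asarray(coeffs_b, dtype=float)
    return np.where(np.abs(a) >= np.abs(b), a, b)

def pearson_corr(x, y):
    """Sample Pearson correlation coefficient between two flattened
    components (the quantity the paper's constraint acts on)."""
    x = np.ravel(x).astype(float)
    y = np.ravel(y).astype(float)
    return float(np.corrcoef(x, y)[0, 1])

# Toy sparse coefficients from two hypothetical modalities.
a = np.array([0.9, -0.1, 0.0, 0.4])
b = np.array([0.2, -0.8, 0.5, -0.3])
print(max_abs_fuse(a, b))  # [ 0.9 -0.8  0.5  0.4]
```

The full method additionally learns coupled dictionaries and alternates between sparse coding and the correlation-constrained decomposition; this fragment only shows the per-coefficient fusion step and the correlation measure.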