Image fusion
Divide-and-conquer algorithm
Focus (optics)
Artificial intelligence
Computer science
Image (mathematics)
Wavelet
Discrete wavelet transform
Component (thermodynamics)
Pattern recognition (psychology)
Fusion
Computer vision
Feature (linguistics)
Wavelet transform
Frequency domain
Domain (mathematical analysis)
Algorithm
Mathematics
Mathematical analysis
Linguistics
Philosophy
Physics
Optics
Thermodynamics
Authors
Zhiliang Wu, Kang Zhang, Hanyu Xuan, Xia Yuan, Chunxia Zhao
Identifier
DOI:10.1016/j.image.2023.116982
Abstract
Multi-focus image fusion aims to generate an all-in-focus image from multiple images focused on different regions. The goals of multi-focus image fusion vary across regions of the image: flat regions should retain smoothness, while edges and textures should be sharpened. However, existing deep learning-based multi-focus image fusion methods usually treat the image as a whole and train the model by optimizing a homogeneous pixel-wise loss (e.g., MSE). As a result, the trained model tends to generate flat regions that are easy to reconstruct, failing to infer realistic details. In this paper, we propose a component divide-and-conquer model for multi-focus image fusion, which uses the discrete wavelet transform to decompose source images into low-frequency and high-frequency components and feeds them into separate branches, where each is aggregated by the proposed attention feature fusion network. Finally, the fused image is obtained by the inverse discrete wavelet transform. This strategy not only addresses the differing difficulty of fusing low-frequency and high-frequency components, but also allows the intermediate supervision learning strategy to supervise each component flexibly, generating realistic details in the fused image. Extensive experiments show that the proposed component divide-and-conquer model achieves significant improvements in both quantitative and qualitative evaluation.
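The pipeline the abstract describes (decompose into wavelet components, fuse each branch separately, invert the transform) can be sketched in a minimal form. The paper fuses components with a learned attention feature fusion network; the sketch below substitutes simple hand-crafted rules (averaging for the low-frequency branch, max-magnitude selection for the high-frequency branch) and a single-level 1-D Haar transform on toy signals, purely to illustrate the divide-and-conquer structure. All function names here are illustrative, not from the paper.

```python
# Hedged sketch of wavelet-domain multi-focus fusion.
# The paper's attention feature fusion network is replaced by
# simple placeholder rules: average (low) and max-abs (high).

def haar_decompose(x):
    """Single-level 1-D Haar DWT: returns (low, high) components."""
    low = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    high = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return low, high

def haar_reconstruct(low, high):
    """Inverse single-level 1-D Haar DWT."""
    x = []
    for l, h in zip(low, high):
        x.extend([l + h, l - h])
    return x

def fuse(a, b):
    """Decompose both sources, fuse each component, invert."""
    la, ha = haar_decompose(a)
    lb, hb = haar_decompose(b)
    # Low-frequency (smooth) branch: average the two sources.
    lf = [(u + v) / 2 for u, v in zip(la, lb)]
    # High-frequency (detail) branch: keep the larger-magnitude
    # coefficient, i.e. the sharper, in-focus source.
    hf = [u if abs(u) >= abs(v) else v for u, v in zip(ha, hb)]
    return haar_reconstruct(lf, hf)

# Two toy 1-D "images": one with a sharp edge, one blurred.
sharp = [10, 10, 10, 10, 90, 90, 90, 90]
blurry = [10, 10, 30, 50, 70, 90, 90, 90]
fused = fuse(sharp, blurry)
```

In a real 2-D implementation one would use a separable 2-D DWT (e.g., `pywt.dwt2`/`pywt.idwt2` from PyWavelets), which yields one low-frequency sub-band and three high-frequency sub-bands per level; the branch structure stays the same.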