Artificial Intelligence, Computer Vision, Computer Science, Image Fusion, Fusion, Image (mathematics), Contextual Image Classification, Pattern Recognition (psychology), Philosophy, Linguistics
Authors
Jiayang Li, Junjun Jiang, Pengwei Liang, Jiayi Ma, Liqiang Nie
Identifier
DOI: 10.1109/tip.2025.3541562
Abstract
In this paper, we introduce MaeFuse, a novel autoencoder model designed for Infrared and Visible Image Fusion (IVIF). Existing approaches to image fusion often rely on training combined with downstream tasks to obtain high-level visual information, which is effective in emphasizing target objects and delivers impressive results in visual quality and task-specific applications. Instead of being driven by downstream tasks, MaeFuse utilizes a pretrained encoder from Masked Autoencoders (MAE), which facilitates omni-feature extraction for low-level reconstruction and high-level vision tasks, to obtain perception-friendly features at low cost. To eliminate the domain gap between features of different modalities and the blocking effect caused by the MAE encoder, we further develop a guided training strategy. This strategy is crafted to ensure that the fusion layer adjusts seamlessly to the feature space of the encoder, gradually enhancing fusion performance. The proposed method facilitates the comprehensive integration of feature vectors from both infrared and visible modalities, thus preserving the rich details inherent in each modality. MaeFuse not only introduces a novel perspective in the realm of fusion techniques but also stands out with impressive performance across various public datasets. The code is available at https://github.com/Henry-Lee-real/MaeFuse.
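The abstract describes the pipeline only at a high level: a frozen, pretrained MAE encoder extracts features from both modalities, a fusion layer merges them, and a decoder reconstructs the fused image. Below is a minimal PyTorch sketch of that structure. The module names (FusionLayer, MaeFuseSketch), the choice of cross-attention for fusion, and all hyperparameters are hypothetical illustrations rather than the paper's actual implementation (see the linked repository for that), and the guided training strategy itself is omitted.

```python
# Minimal sketch of the pipeline described in the abstract.
# Assumptions (not from the paper): a ViT-style MAE encoder that maps an
# image to a token sequence of shape (B, N, dim); cross-attention fusion;
# a lightweight decoder. All names and hyperparameters are hypothetical.
import torch
import torch.nn as nn


class FusionLayer(nn.Module):
    """Fuses infrared and visible token sequences via cross-attention."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, ir_tokens: torch.Tensor, vis_tokens: torch.Tensor) -> torch.Tensor:
        # Visible tokens attend to infrared tokens, followed by a residual add.
        fused, _ = self.attn(vis_tokens, ir_tokens, ir_tokens)
        return self.norm(vis_tokens + fused)


class MaeFuseSketch(nn.Module):
    def __init__(self, encoder: nn.Module, decoder: nn.Module, dim: int):
        super().__init__()
        self.encoder = encoder          # pretrained MAE encoder, kept frozen
        self.fusion = FusionLayer(dim)  # only fusion layer and decoder train
        self.decoder = decoder
        for p in self.encoder.parameters():
            p.requires_grad = False

    def forward(self, ir: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():  # shared "omni" features from the frozen encoder
            ir_tokens = self.encoder(ir)
            vis_tokens = self.encoder(vis)
        fused = self.fusion(ir_tokens, vis_tokens)
        return self.decoder(fused)  # reconstruct the fused image
```

The one design choice the sketch does mirror from the abstract is that gradients flow only through the fusion layer and decoder: training then amounts to adapting the fusion output to the frozen encoder's feature space, which is the alignment the guided training strategy is said to target.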