Computer science
Inference
Flexibility (engineering)
Artificial intelligence
Image fusion
Image (mathematics)
Coding (set theory)
Feature extraction
Architecture
Pixel
Computer vision
Data mining
Pattern recognition (psychology)
Art
Visual arts
Statistics
Mathematics
Set (abstract data type)
Programming language
Authors
Zhu Liu,Jinyuan Liu,Guanyao Wu,Zihang Chen,Xin Fan,Risheng Liu
Identifiers
DOI:10.1109/tcsvt.2024.3351933
Abstract
In recent years, learning-based methods have achieved significant advances in multi-exposure image fusion. However, two major stumbling blocks hinder progress: pixel misalignment and inefficient inference. Existing methods rely on aligned image pairs, making them susceptible to artifacts caused by device motion. They also often depend on handcrafted architectures requiring extensive network engineering, which introduces redundant parameters and degrades inference efficiency and flexibility. To mitigate these limitations, this study introduces an architecture-search-based paradigm incorporating self-alignment and detail-repletion modules for robust multi-exposure image fusion. Specifically, targeting the extreme discrepancy of exposure, the self-alignment module leverages scene relighting to constrain the illumination level for subsequent alignment and feature extraction. The detail-repletion module enhances the texture details of scenes. Additionally, incorporating a hardware-sensitive constraint, we present a fusion-oriented architecture search that explores compact and efficient fusion networks. The proposed method outperforms various competitive schemes, achieving a noteworthy 3.19% improvement in PSNR for general scenarios and an impressive 23.5% enhancement in misaligned scenarios, while reducing inference time by 69.1%. The code will be available at https://github.com/LiuZhu-CV/CRMEF.
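The abstract's "hardware-sensitive constraint" typically means the search objective combines the task loss with an expected-latency penalty over candidate operations, as in latency-aware NAS methods such as FBNet or ProxylessNAS. The sketch below is illustrative only, with hypothetical operation names and latency numbers; it is not the authors' implementation from CRMEF.

```python
import math

# Candidate operations with assumed per-op latency costs (milliseconds).
# Both the op names and the numbers are hypothetical placeholders.
CANDIDATE_OPS = {
    "conv3x3": 1.8,
    "conv5x5": 4.2,
    "dilated3x3": 2.1,
    "skip": 0.1,
}

def softmax(xs):
    """Numerically stable softmax over a list of architecture logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def expected_latency(alpha):
    """Expected latency of one searchable cell: the softmax-weighted
    sum of candidate-op latencies (the standard differentiable
    relaxation used in latency-aware architecture search)."""
    weights = softmax(alpha)
    costs = list(CANDIDATE_OPS.values())
    return sum(w * c for w, c in zip(weights, costs))

def search_objective(task_loss, alphas, lam=0.01):
    """Total objective = fusion task loss + lambda * summed expected
    cell latencies; lambda trades accuracy against inference speed."""
    return task_loss + lam * sum(expected_latency(a) for a in alphas)
```

Minimizing this objective over the architecture logits steers the search toward cheaper operations (e.g. `skip`) unless the heavier ones sufficiently reduce the fusion loss, which is how such a constraint yields compact, fast fusion networks.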