Authors
Jinyuan Liu,Guanyao Wu,Junsheng Luan,Zhiying Jiang,Risheng Liu,Xin Fan
Identifier
DOI:10.1016/j.inffus.2023.02.027
Abstract
Multi-exposure image fusion (MEF) aims to integrate multiple shots taken at different exposures into a single image with a higher dynamic range than any of them. Existing deep-learning-based MEF approaches adopt only reference high-dynamic-range (HDR) images as positive samples to guide the training of fusion networks. However, relying solely on these positive samples makes it difficult to find optimal parameters for the network as a whole; as a result, structure or texture information is blurred or missing in the generated HDR results. Moreover, few approaches attempt to prevent illumination degeneration during the fusion process, resulting in poor color saturation in the fused results. To address these limitations, in this paper we introduce HoLoCo, a novel holistic and local constraint built upon contrastive learning, to discover the intrinsic information of both the source LDR images and the reference HDR image. In this manner, the generated fused images are pulled toward the HDR image and pushed away from the LDR source images in both image-based and patch-based latent feature spaces. In addition, inspired by Retinex theory, we propose a color correction module (CCM) to refine illumination features. The CCM involves dual streams that collaborate to ensure natural color information and detail consistency. Extensive experiments on two datasets show that HoLoCo consistently generates visually appealing HDR results with precise detail and vivid color rendition, performing favorably against state-of-the-art MEF approaches. Source code is available at https://github.com/JinyuanLiu-CV/HoLoCo.
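The pull/push behavior described in the abstract can be illustrated with a minimal contrastive-regularization sketch. This is not the paper's actual loss; it is a hedged toy version assuming feature vectors have already been extracted by some fixed encoder. The function name and the positive/negative ratio form are hypothetical, chosen to show the idea: the loss shrinks as the fused features approach the HDR (positive) features and grow distant from the LDR (negative) features.

```python
import numpy as np

def contrastive_pull_push_loss(fused, hdr, ldrs, eps=1e-8):
    """Toy contrastive regularizer: pull fused features toward the
    HDR reference (positive sample) and push them away from the
    source LDR images (negative samples).

    fused, hdr : 1-D feature vectors of the fused and reference images
    ldrs       : list of 1-D feature vectors of the source LDR images
    """
    # distance to the positive (smaller is better)
    pos = np.linalg.norm(fused - hdr)
    # total distance to the negatives (larger is better)
    neg = sum(np.linalg.norm(fused - l) for l in ldrs)
    # ratio form: minimized when fused is near HDR and far from LDRs
    return pos / (neg + eps)

# toy check: a fused vector identical to the HDR feature gives zero loss
hdr = np.array([1.0, 0.0, 0.5])
ldrs = [np.array([0.0, 0.0, 0.0]), np.array([2.0, 2.0, 2.0])]
print(contrastive_pull_push_loss(hdr, hdr, ldrs))  # → 0.0
```

In HoLoCo this kind of constraint is applied both holistically (whole-image features) and locally (patch features); the sketch above covers only a single feature vector for clarity.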