Keywords
Artificial intelligence, Computer vision, Computer science, Image (mathematics), Cascade, Image quality, Image fusion, Pattern recognition (psychology), Image processing, Fusion, Engineering, Linguistics, Chemical engineering, Philosophy
Author
Yeyao Chen, Mei Yu, Gangyi Jiang, Zongju Peng, Fen Chen
Identifier
DOI: 10.1016/j.jvcir.2019.04.008
Abstract
A single-exposure image may lose details because of the limited dynamic range of a single camera sensor. Multi-image fusion techniques are often used to improve image quality, but if the scene contains moving objects, the fused image may suffer from ghost artifacts. To avoid this problem and enhance single-exposure images, this paper proposes a dual-network cascade model for single-image enhancement, consisting of an exposure prediction network and an exposure fusion network. First, the exposure prediction network generates an under-exposed and an over-exposed image from the input normal-exposure image, so as to recover details lost in the under-exposed and over-exposed regions. Then, the exposure fusion network fuses the input image with the generated under-/over-exposed images to produce the final enhanced image. A loss function built on a structural dissimilarity index is used to alleviate checkerboard artifacts in the generated images. Furthermore, through three-phase training, the model robustly generates enhanced images without any post-processing. Experimental results demonstrate that the proposed method effectively improves image contrast and reconstructs details in the under-exposed and over-exposed regions of the original image.
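The abstract describes a two-network cascade (exposure prediction followed by exposure fusion) trained with a structural-dissimilarity loss. As a minimal sketch of these two ideas, not the authors' implementation, the PyTorch code below wires a hypothetical `predict_net` and `fusion_net` together and computes a DSSIM-style loss, (1 - SSIM)/2, over uniform local windows; the class and function names, the channel-wise concatenation at the fusion stage, the window size, and the stability constants are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F


class DualCascade(torch.nn.Module):
    """Hypothetical two-stage cascade: exposure prediction, then exposure fusion."""

    def __init__(self, predict_net, fusion_net):
        super().__init__()
        self.predict_net = predict_net  # normal-exposure image -> (under, over) images
        self.fusion_net = fusion_net    # (input, under, over) -> enhanced image

    def forward(self, x):
        under, over = self.predict_net(x)
        # Channel-wise concatenation is an assumption about how the fusion
        # network receives its three inputs; the abstract does not specify this.
        return self.fusion_net(torch.cat([x, under, over], dim=1))


def dssim_loss(pred, target, window=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """DSSIM-style loss, (1 - SSIM) / 2, with local statistics from uniform windows."""
    pad = window // 2
    mu_x = F.avg_pool2d(pred, window, stride=1, padding=pad)
    mu_y = F.avg_pool2d(target, window, stride=1, padding=pad)
    var_x = F.avg_pool2d(pred * pred, window, stride=1, padding=pad) - mu_x ** 2
    var_y = F.avg_pool2d(target * target, window, stride=1, padding=pad) - mu_y ** 2
    cov_xy = F.avg_pool2d(pred * target, window, stride=1, padding=pad) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
    return ((1.0 - ssim) / 2.0).clamp(0, 1).mean()
```

The exact network architectures and the three-phase training schedule are not given in the abstract, so they are left abstract here; the DSSIM term stands in for the structural-dissimilarity loss the paper credits with suppressing checkerboard artifacts.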