Basic facts
Artificial intelligence
Computer science
Deep learning
Fuse (electrical)
Metric (unit)
Image (mathematics)
Block (permutation group theory)
Pattern recognition (psychology)
Set (abstract data type)
Image fusion
Oracle Corporation
Computer vision
Machine learning
Mathematics
Engineering
Geometry
Electrical engineering
Economics
Software engineering
Programming language
Operations management
Authors
K. Ram Prabhakar, V. Sai Srikar, R. Venkatesh Babu
Source
Venue: International Conference on Computer Vision (ICCV)
Date: 2017-10-01
Pages: 4724-4732
Citations: 561
Identifier
DOI: 10.1109/iccv.2017.505
Abstract
We present a novel deep learning architecture for fusing static multi-exposure images. Current multi-exposure fusion (MEF) approaches use hand-crafted features to fuse the input sequence. However, these weak hand-crafted representations are not robust to varying input conditions, and they perform poorly on extreme exposure image pairs. Thus, it is highly desirable to have a method that is robust to varying input conditions and capable of handling extreme exposures without artifacts. Deep representations are known to be robust to input conditions and have shown phenomenal performance in supervised settings. However, the stumbling block in using deep learning for MEF has been the lack of sufficient training data and of an oracle to provide ground truth for supervision. To address these issues, we have gathered a large dataset of multi-exposure image stacks for training, and, to circumvent the need for ground-truth images, we propose an unsupervised deep learning framework for MEF that uses a no-reference quality metric as its loss function. The proposed approach uses a novel CNN architecture trained to learn the fusion operation without a reference ground-truth image. The model fuses a set of common low-level features extracted from each image to generate artifact-free, perceptually pleasing results. We perform extensive quantitative and qualitative evaluation and show that the proposed technique outperforms existing state-of-the-art approaches on a variety of natural images.
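The abstract's core idea — extract shared low-level features from each exposure, fuse them, and train the CNN unsupervised against a no-reference quality loss — can be sketched as below. This is a minimal illustration in PyTorch, not the paper's architecture: the layer sizes, additive feature fusion, and the placeholder loss (the actual work uses a structural-similarity-based no-reference metric) are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    """Illustrative fusion network: a shared (weight-tied) encoder is
    applied to each exposure, the feature maps are fused, and a decoder
    reconstructs a single fused image. Sizes are not the paper's."""
    def __init__(self, ch=16):
        super().__init__()
        # Shared low-level feature extractor, applied to every exposure
        self.encode = nn.Sequential(
            nn.Conv2d(1, ch, 5, padding=2), nn.ReLU(),
            nn.Conv2d(ch, ch, 5, padding=2), nn.ReLU(),
        )
        # Decoder maps fused features back to one luminance image
        self.decode = nn.Sequential(
            nn.Conv2d(ch, ch, 5, padding=2), nn.ReLU(),
            nn.Conv2d(ch, 1, 5, padding=2),
        )

    def forward(self, under, over):
        f1, f2 = self.encode(under), self.encode(over)
        fused = f1 + f2  # simple additive feature fusion (illustrative)
        return self.decode(fused)

def no_reference_loss(fused, inputs):
    """Placeholder for a no-reference quality loss. Here: squared
    distance to the per-pixel mean of the inputs, for illustration only;
    the paper's loss is a perceptual, SSIM-style metric instead."""
    target = torch.stack(inputs).mean(dim=0)
    return ((fused - target) ** 2).mean()

# One unsupervised training step on random luminance patches:
# no ground-truth fused image is ever required.
model = FusionCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
under = torch.rand(4, 1, 32, 32)  # underexposed patch batch (dummy data)
over = torch.rand(4, 1, 32, 32)   # overexposed patch batch (dummy data)
fused = model(under, over)
loss = no_reference_loss(fused, [under, over])
opt.zero_grad(); loss.backward(); opt.step()
```

Because the loss scores the fused output only against the inputs themselves, training needs just stacks of differently exposed images — which is exactly why the authors could train on a collected multi-exposure dataset without an oracle providing ground truth.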