Keywords
Discriminator
Benchmark (surveying)
Flexibility (engineering)
Ground truth
Deep learning
Computer science
Image (mathematics)
Artificial intelligence
Image fusion
Perception
Pattern recognition (psychology)
Computer vision
Mathematics
Telecommunications
Biology
Detector
Statistics
Geodesy
Neuroscience
Geography
Authors
Yifan Jiang, Xinyu Gong, Ding Liu, Yu Cheng, Fang Chen, Xiaohui Shen, Shuicheng Yan, Pan Zhou, Zhangyang Wang
Source
Journal: IEEE Transactions on Image Processing
Publisher: Institute of Electrical and Electronics Engineers
Date: 2021-01-01
Volume/pages: 30: 2340-2349
Citations: 1155
Identifiers
DOI: 10.1109/tip.2021.3051462
Abstract
Deep learning-based methods have achieved remarkable success in image restoration and enhancement, but are they still competitive when there is a lack of paired training data? As one such example, this paper explores the low-light image enhancement problem, where in practice it is extremely challenging to simultaneously take a low-light and a normal-light photo of the same visual scene. We propose a highly effective unsupervised generative adversarial network, dubbed EnlightenGAN, that can be trained without low/normal-light image pairs, yet proves to generalize very well on various real-world test images. Instead of supervising the learning using ground truth data, we propose to regularize the unpaired training using the information extracted from the input itself, and benchmark a series of innovations for the low-light image enhancement problem, including a global-local discriminator structure, a self-regularized perceptual loss fusion, and the attention mechanism. Through extensive experiments, our proposed approach outperforms recent methods under a variety of metrics in terms of visual quality and subjective user study. Thanks to the great flexibility brought by unpaired training, EnlightenGAN is demonstrated to be easily adaptable to enhancing real-world images from various domains. Our codes and pre-trained models are available at: https://github.com/VITA-Group/EnlightenGAN.
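The abstract describes a self-regularized perceptual loss that replaces ground-truth supervision with information extracted from the input itself. The following is a minimal sketch of that idea, not the authors' released implementation: the choice of a frozen VGG16 feature extractor cut at a fixed layer, the plain MSE feature distance, and the omission of ImageNet normalization are all illustrative assumptions.

# Minimal sketch (assumed, not EnlightenGAN's official code) of a
# self-regularized perceptual loss: features of the low-light input itself
# serve as the reference for the enhanced output, so no paired
# normal-light ground truth is required.
import torch
import torch.nn as nn
import torchvision.models as models

class SelfRegularizedPerceptualLoss(nn.Module):
    def __init__(self, feature_layer: int = 16):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        # Frozen VGG16 feature extractor up to the chosen convolutional layer
        # (layer index is an illustrative assumption).
        self.features = nn.Sequential(*list(vgg.features.children())[:feature_layer]).eval()
        for p in self.features.parameters():
            p.requires_grad = False

    def forward(self, enhanced: torch.Tensor, low_light: torch.Tensor) -> torch.Tensor:
        # Compare feature maps of the generator output and the original input;
        # the "reference" is the input itself, which is what permits unpaired training.
        return torch.mean((self.features(enhanced) - self.features(low_light)) ** 2)

# Example usage: perceptual_term = SelfRegularizedPerceptualLoss()(fake_bright, dark_img)

In a full training loop such a term would be combined with the adversarial losses from the global and local discriminators mentioned in the abstract; the weighting between these terms is not specified here.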