Authors
Chunyan She,Tao Chen,Shukai Duan,Lidan Wang
Identifier
DOI:10.1016/j.knosys.2023.111053
Abstract
Low-light image enhancement (LLIE) is a common pretext task for computer vision, which aims to adjust the luminance of a low-light image to obtain a normal-light image. Unsupervised LLIE has recently been developed; however, its performance is limited by the lack of sufficient semantic information and of guidance from a strict discriminator. In this work, a semantic-aware generative adversarial network is proposed to alleviate these limitations. We use a VGG model pre-trained on ImageNet to extract prior semantic information, which is organically fed into the generator to refine its feature representation, and we develop an adaptive image fusion strategy applied to the output layer of the generator. Furthermore, to improve the discriminator's capacity to supervise the generator, we design a dual-discriminator with dense connections and two time-aware, image-quality-driven priority queues. Quantitative and qualitative experiments on four test datasets demonstrate the competitiveness of the proposed model and the effectiveness of each component. Our code is available at: https://github.com/Shecyy/SAGAN.
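The abstract does not spell out how the prior semantic information enters the generator; the sketch below is one plausible reading, not the authors' released implementation (see their GitHub repository for that). It freezes an ImageNet-pretrained VGG-16 trunk as the semantic prior and fuses its features into an intermediate generator feature map by projection and concatenation. The cut point (relu3_3), channel sizes, and the concatenation-based fusion are illustrative assumptions.

```python
# Minimal sketch: injecting a frozen VGG semantic prior into generator features.
# Assumptions: VGG-16 cut at relu3_3, 64-channel generator features, fusion by
# 1x1 projection + concatenation. Not the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class SemanticPrior(nn.Module):
    """Frozen ImageNet-pretrained VGG-16 trunk used only for feature extraction."""

    def __init__(self):
        super().__init__()
        vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features
        self.trunk = vgg[:16].eval()           # up to relu3_3 (assumed cut point)
        for p in self.trunk.parameters():
            p.requires_grad_(False)            # prior stays fixed during GAN training

    @torch.no_grad()
    def forward(self, x):
        return self.trunk(x)                   # (B, 256, H/4, W/4) semantic features


class SemanticFusionBlock(nn.Module):
    """Refine a generator feature map with resized semantic features (assumed design)."""

    def __init__(self, gen_channels: int, sem_channels: int = 256):
        super().__init__()
        self.proj = nn.Conv2d(sem_channels, gen_channels, kernel_size=1)
        self.fuse = nn.Conv2d(2 * gen_channels, gen_channels, kernel_size=3, padding=1)

    def forward(self, gen_feat, sem_feat):
        sem = F.interpolate(sem_feat, size=gen_feat.shape[-2:],
                            mode="bilinear", align_corners=False)
        sem = self.proj(sem)                   # match generator channel count
        return self.fuse(torch.cat([gen_feat, sem], dim=1))


# Usage: fuse the semantic prior of the low-light input into an intermediate feature map.
prior = SemanticPrior()
fusion = SemanticFusionBlock(gen_channels=64)
low_light = torch.rand(1, 3, 256, 256)         # dummy low-light image in [0, 1]
gen_feat = torch.rand(1, 64, 128, 128)         # dummy intermediate generator feature
refined = fusion(gen_feat, prior(low_light))   # same shape as gen_feat
```

In this reading, the frozen prior network contributes high-level scene semantics that the lightweight generator would otherwise lack, while the fusion block keeps the generator's spatial resolution and channel count unchanged so it can be dropped into an existing encoder-decoder.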