Sketch
Computer science
Image translation
Leverage (statistics)
Artificial intelligence
Generative grammar
Class (philosophy)
Adversarial system
Image synthesis
Computer graphics
Generative adversarial network
Image (mathematics)
Drawing
Object (grammar)
Computer vision
Pattern recognition (psychology)
Computer graphics (images)
Algorithm
Authors
Zeyu Li, Cheng Deng, Erkun Yang, Dacheng Tao
Identifier
DOI:10.1109/tmm.2020.3015015
Abstract
Sketch-based image synthesis is a challenging problem in computer graphics and vision. Existing approaches either require exact edge maps or rely on the retrieval of existing photographs, which limits their applicability in real-world scenarios. Accordingly, in this work, we propose a staged, semi-supervised method for sketch-to-image synthesis based on generative adversarial networks, which can directly generate realistic images from novice sketches. More specifically, we first adopt a conditional generative adversarial network (CGAN) to extract class-wise representations from unpaired images. These class-wise representations are then incorporated into a second CGAN, which generates realistic images from sketches. By incorporating the class-wise representations, our method can leverage both the general class information from unpaired images and the targeted object information from input sketches. This network architecture also enables us to take full advantage of widely available unpaired images and learn more accurate class representations. Extensive experiments demonstrate that, compared with state-of-the-art image translation methods, our approach achieves more promising results, synthesizing images with significantly better Inception Score and Fréchet Inception Distance.
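The abstract's central idea is conditioning a generator on class-wise representations learned from unpaired images. The following is a minimal illustrative sketch of that conditioning mechanism, not the authors' implementation: it assumes toy MLP networks in NumPy, with hypothetical dimensions, and shows how a learned class embedding can be concatenated with the generator's input so that both generator and discriminator are class-conditioned.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(in_dim, out_dim):
    # He-style initialization for one fully connected layer
    return rng.normal(0, np.sqrt(2 / in_dim), (in_dim, out_dim))

class ConditionalGenerator:
    """Toy conditional generator: maps (noise, class embedding) -> image vector."""
    def __init__(self, noise_dim=16, n_classes=10, embed_dim=8, img_dim=64):
        # One learnable embedding row per class: a stand-in for the
        # "class-wise representations" extracted from unpaired images.
        self.embed = rng.normal(0, 0.1, (n_classes, embed_dim))
        self.w1 = dense(noise_dim + embed_dim, 128)
        self.w2 = dense(128, img_dim)

    def __call__(self, z, labels):
        # Condition on the class by concatenating its embedding with the noise
        h = np.concatenate([z, self.embed[labels]], axis=1)
        h = np.maximum(0, h @ self.w1)   # ReLU hidden layer
        return np.tanh(h @ self.w2)      # image vector in [-1, 1]

class ConditionalDiscriminator:
    """Toy conditional discriminator: scores (image, class) pairs as real/fake."""
    def __init__(self, img_dim=64, n_classes=10, embed_dim=8):
        self.embed = rng.normal(0, 0.1, (n_classes, embed_dim))
        self.w1 = dense(img_dim + embed_dim, 128)
        self.w2 = dense(128, 1)

    def __call__(self, x, labels):
        h = np.concatenate([x, self.embed[labels]], axis=1)
        h = np.maximum(0, h @ self.w1)
        return 1 / (1 + np.exp(-(h @ self.w2)))  # probability "real"

G, D = ConditionalGenerator(), ConditionalDiscriminator()
z = rng.normal(size=(4, 16))
labels = np.array([0, 1, 2, 3])
fake = G(z, labels)          # shape (4, 64)
score = D(fake, labels)      # shape (4, 1), values in (0, 1)
```

In the paper's staged setting, the first CGAN would be trained on unpaired images to learn the class embeddings, which a second, sketch-conditioned CGAN then reuses; here both networks simply share the same conditioning pattern for clarity.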