Image-to-Image Translation
Authors
Jun-Yan Zhu,Taesung Park,Phillip Isola,Alexei A. Efros
Source
Journal: Cornell University - arXiv
Date: 2017-01-01
Citations: 1232
Identifier
DOI:10.48550/arxiv.1703.10593
Abstract
Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain $X$ to a target domain $Y$ in the absence of paired examples. Our goal is to learn a mapping $G: X \rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping $F: Y \rightarrow X$ and introduce a cycle consistency loss to push $F(G(X)) \approx X$ (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.
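The abstract's objective can be made concrete with a short sketch: one generator per direction ($G: X \rightarrow Y$, $F: Y \rightarrow X$), a discriminator per domain, an adversarial term that makes translated images look like samples from the target domain, and an L1 cycle-consistency term that pushes $F(G(x)) \approx x$ and $G(F(y)) \approx y$. The code below is a minimal illustrative PyTorch sketch of that combined loss; the tiny networks, the least-squares adversarial loss, and the cycle weight `lambda_cyc = 10.0` are assumptions for illustration, not the authors' architecture or training settings.

```python
# Minimal sketch of the adversarial + cycle-consistency objective (illustrative only).
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy image-to-image generator (placeholder, not the paper's generator)."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Toy patch-style discriminator (placeholder, not the paper's discriminator)."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

# G maps X -> Y, F maps Y -> X; one discriminator per target domain.
G, F = TinyGenerator(), TinyGenerator()
D_X, D_Y = TinyDiscriminator(), TinyDiscriminator()

adv_loss = nn.MSELoss()   # least-squares GAN loss as a stand-in adversarial term
cyc_loss = nn.L1Loss()    # cycle-consistency penalty
lambda_cyc = 10.0         # assumed weight on the cycle term

# Unpaired batches from the two domains (random tensors stand in for real images).
real_x = torch.randn(4, 3, 64, 64)
real_y = torch.randn(4, 3, 64, 64)

# Generator objective: fool both discriminators and reconstruct the inputs.
fake_y = G(real_x)                  # translate X -> Y
fake_x = F(real_y)                  # translate Y -> X
loss_adv = (adv_loss(D_Y(fake_y), torch.ones_like(D_Y(fake_y))) +
            adv_loss(D_X(fake_x), torch.ones_like(D_X(fake_x))))
loss_cyc = (cyc_loss(F(fake_y), real_x) +   # F(G(x)) should recover x
            cyc_loss(G(fake_x), real_y))    # G(F(y)) should recover y
loss_G = loss_adv + lambda_cyc * loss_cyc
loss_G.backward()
```

In a full training loop the discriminators would be updated in alternation with the generators on real versus translated images; the sketch only shows how the adversarial and cycle terms combine into the generator loss.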