Image translation
Computer science
Regularization (linguistics)
Translation (biology)
Artificial intelligence
Image (mathematics)
Generator (circuit theory)
Computer vision
Pattern recognition (psychology)
Biochemistry
Quantum mechanics
Gene
Messenger RNA
Physics
Power (physics)
Chemistry
Authors
Chao Yang,Tae‐Hwan Kim,Ruizhe Wang,Hao Peng,C.-C. Jay Kuo
Source
Journal: IEEE Transactions on Image Processing
[Institute of Electrical and Electronics Engineers]
Date: 2019-10-01
Volume/Issue: 28 (10): 4845-4856
Citations: 34
Identifier
DOI: 10.1109/tip.2019.2914583
Abstract
Image translation between two domains is a class of problems aiming to learn a mapping from an input image in the source domain to an output image in the target domain. It has been applied to numerous applications, such as data augmentation, domain adaptation, and unsupervised training. When paired training data is not accessible, image translation becomes an ill-posed problem. We constrain the problem with the assumption that the translated image needs to be perceptually similar to the original image while also appearing to be drawn from the new domain, and propose a simple yet effective image translation model consisting of a single generator trained with a self-regularization term and an adversarial term. We further observe that existing image translation techniques are agnostic to the subjects of interest and often introduce unwanted changes or artifacts to the input. Thus, we propose to add an attention module that predicts an attention map to guide the image translation process. The module learns to attend to key parts of the image while keeping everything else unaltered, essentially avoiding undesired artifacts or changes. Extensive experiments and evaluations show that our model, while being simpler, achieves significantly better performance than existing image translation methods.
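A minimal sketch of the training objective described in the abstract, assuming a PyTorch-style setup; this is not the authors' released code. The names generator, attention_net, discriminator, perceptual_distance, and lambda_reg are illustrative placeholders, and the perceptual distance stands in for whatever self-regularization similarity measure the paper actually uses.

```python
# Hypothetical sketch of the attention-guided, self-regularized translation
# objective outlined in the abstract (single generator + adversarial term
# + self-regularization term). All module names are placeholders.
import torch
import torch.nn.functional as F

def translate(generator, attention_net, x):
    """Attention-guided translation: only attended regions are altered."""
    a = torch.sigmoid(attention_net(x))   # attention map in [0, 1]
    y = generator(x)                       # raw translated image
    return a * y + (1.0 - a) * x           # keep unattended parts unchanged

def generator_loss(generator, attention_net, discriminator,
                   perceptual_distance, x, lambda_reg=10.0):
    """Adversarial term plus perceptual self-regularization term."""
    y = translate(generator, attention_net, x)
    logits = discriminator(y)
    # Adversarial term: the translated image should look like the target domain.
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    # Self-regularization term: the translated image should stay perceptually
    # close to the input (e.g., a feature-space distance).
    reg = perceptual_distance(y, x)
    return adv + lambda_reg * reg
```

The composition step `a * y + (1 - a) * x` reflects the abstract's claim that the attention module alters only key regions while leaving the rest of the input unchanged; the weighting lambda_reg between the two terms is an assumed hyperparameter.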