Computer science
Image translation
Translation (biology)
Scalability
Artificial intelligence
Encoder
Domain (mathematical analysis)
Inference
Image (mathematics)
Pattern recognition (psychology)
Encoding (memory)
Computer vision
Mathematics
Operating system
Messenger RNA
Mathematical analysis
Gene
Database
Chemistry
Biochemistry
Authors
Siyu Huang,Jie An,Donglai Wei,Zudi Lin,Jiebo Luo,Hanspeter Pfister
Identifier
DOI:10.1109/tpami.2023.3287774
Abstract
Unpaired image-to-image translation (UNIT) aims to map images between two visual domains without paired training data. However, given a UNIT model trained on certain domains, it is difficult for current methods to incorporate new domains because they often need to train the full model on both the existing and the new domains. To address this problem, we propose a new domain-scalable UNIT method, termed latent space anchoring, which can be efficiently extended to new visual domains without fine-tuning the encoders and decoders of existing domains. Our method anchors images of different domains to the same latent space of frozen GANs by learning lightweight encoder and regressor models to reconstruct single-domain images. In the inference phase, the learned encoders and decoders of different domains can be arbitrarily combined to translate images between any two domains without fine-tuning. Experiments on various datasets show that the proposed method achieves superior performance on both standard and domain-scalable UNIT tasks in comparison with state-of-the-art methods.
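To make the anchoring mechanism concrete, below is a minimal PyTorch sketch of the idea the abstract describes. It is an illustrative assumption rather than the authors' implementation: the toy FrozenGenerator stands in for a pretrained GAN, and the 32x32 image shapes, module architectures, L1 reconstruction loss, and the names DomainEncoder, DomainRegressor, and train_step are all hypothetical.

```python
import torch
import torch.nn as nn

LATENT_DIM = 128

class FrozenGenerator(nn.Module):
    """Stand-in for a pretrained GAN generator; its weights stay frozen,
    so the shared latent space never moves (the "anchor")."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 3 * 32 * 32),  # toy 32x32 RGB output
        )
        for p in self.parameters():
            p.requires_grad = False

    def forward(self, z):
        return self.net(z).view(-1, 3, 32, 32)

class DomainEncoder(nn.Module):
    """Lightweight per-domain encoder: image -> shared latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
            nn.Linear(256, LATENT_DIM),
        )

    def forward(self, x):
        return self.net(x)

class DomainRegressor(nn.Module):
    """Lightweight per-domain regressor: generator output -> that domain's
    image space (serves as the domain's decoder at inference)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, g_out):
        return self.net(g_out)

def train_step(encoder, regressor, generator, images, opt):
    """Single-domain reconstruction: only this domain's encoder and
    regressor are updated; the frozen generator and all other domains'
    modules stay fixed."""
    opt.zero_grad()
    z = encoder(images)              # image -> shared latent code
    recon = regressor(generator(z))  # frozen GAN -> back to same domain
    loss = nn.functional.l1_loss(recon, images)
    loss.backward()
    opt.step()
    return loss.item()

generator = FrozenGenerator().eval()

# Adding a new domain B trains only its own lightweight modules.
enc_b, reg_b = DomainEncoder(), DomainRegressor()
opt_b = torch.optim.Adam(
    list(enc_b.parameters()) + list(reg_b.parameters()), lr=1e-4)
train_step(enc_b, reg_b, generator, torch.randn(4, 3, 32, 32), opt_b)

# Inference, A -> B: combine domain A's encoder with domain B's regressor.
# No fine-tuning is needed, because both were anchored to the same frozen
# latent space during their own single-domain training.
enc_a = DomainEncoder()  # assume trained on domain A the same way
with torch.no_grad():
    x_a = torch.randn(4, 3, 32, 32)     # placeholder batch from domain A
    x_b = reg_b(generator(enc_a(x_a)))  # translated into domain B
```

The design point the sketch captures is that freezing the generator fixes the latent space once and for all: each new domain then costs only one lightweight encoder/regressor pair trained on single-domain reconstruction, and any domain's encoder can be paired with any other domain's regressor at inference.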