Authors
Jin Zhao, Feifei Lee, Chunyan Hu, Hongliu Yu, Chen Qiu
Identifier
DOI:10.1016/j.neucom.2022.07.084
Abstract
Recently, image-to-image translation, whose purpose is to learn a mapping between two image domains, has attracted the interest of researchers. However, image translation becomes an intrinsically ill-posed problem when only unpaired training data are given; that is, there are infinitely many possible mappings between the two domains. Existing methods usually fail to learn a sufficiently accurate mapping, leading to poor-quality generated results. We believe that if the framework focuses more on translating important object regions rather than irrelevant information, such as the background, the difficulty of learning the mapping is reduced. In this paper, we propose a lightweight domain-attention generative adversarial network (LDA-GAN) for unpaired image-to-image translation, which has fewer parameters and lower memory usage. An improved domain-attention module (DAM) is introduced to establish long-range dependencies between the two domains, so that the generator can focus on the relevant regions and generate more realistic images. Furthermore, a novel separable-residual block (SRB) is designed to retain depth and spatial information during translation at a lower computational cost. Extensive experiments demonstrate the effectiveness of our model on various image translation tasks in both qualitative and quantitative evaluations.
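The abstract does not give the internals of the DAM or the SRB. The following is a minimal PyTorch sketch of how such components are commonly built, not the authors' exact architecture: the class names DomainAttention and SeparableResidualBlock, the SAGAN-style query/key/value attention, and the depthwise-separable convolutions inside the residual block are all assumptions for illustration.

import torch
import torch.nn as nn

class DomainAttention(nn.Module):
    # Hypothetical sketch of a domain-attention module: non-local
    # self-attention whose query-key affinities capture long-range
    # dependencies, with the attended features blended back so the
    # generator can emphasise relevant regions.
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned blend weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, h*w, c//8)
        k = self.key(x).flatten(2)                     # (b, c//8, h*w)
        attn = torch.softmax(q @ k, dim=-1)            # (b, h*w, h*w)
        v = self.value(x).flatten(2)                   # (b, c, h*w)
        out = (v @ attn.transpose(1, 2)).reshape(b, c, h, w)
        return x + self.gamma * out

class SeparableResidualBlock(nn.Module):
    # Hypothetical sketch of a separable-residual block: each standard
    # 3x3 convolution is replaced by a depthwise 3x3 convolution
    # (spatial filtering, one filter per channel) followed by a
    # pointwise 1x1 convolution (mixing across channels), cutting
    # parameters and FLOPs while the skip connection preserves
    # spatial and depth information.
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1,
                      groups=channels, bias=False),   # depthwise 3x3
            nn.Conv2d(channels, channels, kernel_size=1, bias=False),  # pointwise 1x1
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1,
                      groups=channels, bias=False),   # depthwise 3x3
            nn.Conv2d(channels, channels, kernel_size=1, bias=False),  # pointwise 1x1
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # residual connection

if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)          # dummy generator feature map
    feat = DomainAttention(64)(feat)           # long-range dependencies
    feat = SeparableResidualBlock(64)(feat)    # cheap residual refinement
    print(feat.shape)                          # torch.Size([1, 64, 32, 32])

A residual block built this way uses roughly (9 + C) * C weights per conv pair instead of 9 * C * C for a plain 3x3 convolution with C channels, which is where the parameter and memory savings claimed for a lightweight generator would come from under these assumptions.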