Computer science
Discriminator
Generator (circuit theory)
Context (archaeology)
Semantics (computer science)
Modal verb
Artificial intelligence
Image (mathematics)
Computer vision
Pattern recognition (psychology)
Telecommunications
Programming language
Biology
Polymer chemistry
Detector
Quantum mechanics
Physics
Chemistry
Power (physics)
Paleontology
Authors
Hongchen Tan,Baocai Yin,Kaiqiang Xu,Huasheng Wang,Xiuping Liu,Xin Li
Source
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Publisher: Institute of Electrical and Electronics Engineers
Date: 2023-12-28
Volume/Issue: 34 (7): 5400-5413
Identifier
DOI: 10.1109/tcsvt.2023.3347971
Abstract
We propose a novel text-to-image generation network, the Attention-bridged Modal Interaction Generative Adversarial Network (AMI-GAN), to better exploit modal interaction and perception for high-quality image synthesis. AMI-GAN contains two novel designs: an Attention-bridged Modal Interaction (AMI) module and a Residual Perception Discriminator (RPD). In the AMI module, we design a multi-scale attention mechanism that performs semantic alignment, fusion, and enhancement between text and image, so as to better refine the details and contextual semantics of the synthesized image. In the RPD, we design a multi-scale information perception mechanism with a novel information adjustment function, which encourages the discriminator to better perceive visual differences between real and synthesized images; the discriminator in turn drives the generator to improve the visual quality of the synthesized image. Based on these designs, we build two versions: a single-stage generation framework (AMI-GAN-S) and a multi-stage generation framework (AMI-GAN-M). The former can synthesize high-resolution images thanks to its low computational cost, while the latter synthesizes images with more realistic detail. Experimental results on two widely used T2I datasets show that our AMI-GANs achieve competitive performance on the T2I task.
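The abstract does not specify the implementation of the AMI module; as a rough illustration only (not the authors' released code), the sketch below shows one plausible way to build a multi-scale, attention-based text-image fusion block in PyTorch. All names (CrossModalAttention, MultiScaleTextImageFusion, img_dim, word_dim, scales) are assumptions for illustration, and the actual AMI design in the paper may differ substantially.

```python
# Hypothetical sketch of an attention-bridged text-image fusion block.
# NOT the authors' implementation; names and shapes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalAttention(nn.Module):
    """Word-to-region cross attention at a single spatial scale."""

    def __init__(self, img_dim: int, word_dim: int):
        super().__init__()
        self.query = nn.Conv2d(img_dim, img_dim, kernel_size=1)  # image features -> queries
        self.key = nn.Linear(word_dim, img_dim)                   # word embeddings -> keys
        self.value = nn.Linear(word_dim, img_dim)                 # word embeddings -> values
        self.scale = img_dim ** -0.5

    def forward(self, img_feat: torch.Tensor, words: torch.Tensor) -> torch.Tensor:
        # img_feat: (B, C, H, W); words: (B, L, word_dim)
        b, c, h, w = img_feat.shape
        q = self.query(img_feat).flatten(2).transpose(1, 2)       # (B, HW, C)
        k = self.key(words)                                        # (B, L, C)
        v = self.value(words)                                      # (B, L, C)
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)  # (B, HW, L)
        ctx = (attn @ v).transpose(1, 2).reshape(b, c, h, w)       # aligned word context per region
        # Residual fusion: enhance image features with the aligned text context.
        return img_feat + ctx


class MultiScaleTextImageFusion(nn.Module):
    """Apply cross-modal attention at several spatial scales and merge the results."""

    def __init__(self, img_dim: int, word_dim: int, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.blocks = nn.ModuleList(CrossModalAttention(img_dim, word_dim) for _ in scales)
        self.merge = nn.Conv2d(img_dim * len(scales), img_dim, kernel_size=1)

    def forward(self, img_feat: torch.Tensor, words: torch.Tensor) -> torch.Tensor:
        h, w = img_feat.shape[-2:]
        outs = []
        for s, block in zip(self.scales, self.blocks):
            # Downsample, attend to the words at this scale, then upsample back.
            feat = F.avg_pool2d(img_feat, kernel_size=s) if s > 1 else img_feat
            fused = block(feat, words)
            outs.append(F.interpolate(fused, size=(h, w), mode="nearest"))
        return self.merge(torch.cat(outs, dim=1))


if __name__ == "__main__":
    fusion = MultiScaleTextImageFusion(img_dim=64, word_dim=256)
    img = torch.randn(2, 64, 32, 32)    # generator feature map
    words = torch.randn(2, 18, 256)     # per-word text embeddings
    print(fusion(img, words).shape)     # torch.Size([2, 64, 32, 32])
```

The residual fusion and the 1x1 merge convolution are design choices made here to keep the sketch compact; the paper's multi-scale attention and information adjustment function should be consulted for the actual formulation.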