Authors
Bowen Li, Xiaojuan Qi, Thomas Lukasiewicz, Philip H. S. Torr
Source
Journal: Cornell University - arXiv
Date: 2019-01-01
Citations: 140
Identifier
DOI: 10.48550/arxiv.1909.07083
Abstract
In this paper, we propose a novel controllable text-to-image generative adversarial network (ControlGAN), which can effectively synthesise high-quality images and also control parts of the image generation according to natural language descriptions. To achieve this, we introduce a word-level spatial and channel-wise attention-driven generator that can disentangle different visual attributes, and allow the model to focus on generating and manipulating subregions corresponding to the most relevant words. Also, a word-level discriminator is proposed to provide fine-grained supervisory feedback by correlating words with image regions, facilitating training an effective generator which is able to manipulate specific visual attributes without affecting the generation of other content. Furthermore, perceptual loss is adopted to reduce the randomness involved in the image generation, and to encourage the generator to manipulate specific attributes required in the modified text. Extensive experiments on benchmark datasets demonstrate that our method outperforms existing state of the art, and is able to effectively manipulate synthetic images using natural language descriptions. Code is available at https://github.com/mrlibw/ControlGAN.
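The word-level spatial attention mentioned in the abstract can be illustrated with a minimal sketch: each image region attends over the word features of the description and aggregates a word-context vector, so that spatially local generation can be conditioned on the most relevant words. This is a simplified NumPy illustration under assumed shapes, not the authors' implementation (the function name and dimensions are hypothetical):

```python
import numpy as np

def word_spatial_attention(words, regions):
    """Simplified word-level spatial attention.

    words:   (T, D) word feature vectors from the text encoder
    regions: (N, D) image region features (N = H*W, flattened feature map)
    Returns: (N, D) word-context features, where each region is a
             softmax-weighted combination of the word features.
    """
    scores = regions @ words.T                    # (N, T) region-word similarity
    scores -= scores.max(axis=1, keepdims=True)   # subtract row max for stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # softmax over the T words
    return attn @ words                           # (N, D) per-region context

rng = np.random.default_rng(0)
words = rng.normal(size=(5, 8))      # e.g. 5 words, 8-dim features
regions = rng.normal(size=(16, 8))   # e.g. a 4x4 feature map, flattened
context = word_spatial_attention(words, regions)
print(context.shape)  # (16, 8)
```

In the paper this idea is paired with channel-wise attention and a word-level discriminator; the sketch above only shows the spatial direction, where attention weights tie each subregion to the words that should control it.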