Concepts: Inpainting, Computer science, Artificial intelligence, Ground truth, Generator (circuit theory), Consistency (knowledge bases), Image (mathematics), Context (archaeology), Feature (linguistics), Automatic summarization, Generative grammar, Pattern recognition (psychology), Deep learning, Generative model, Biology, Physics, Philosophy, Quantum mechanics, Paleontology, Linguistics, Power (physics)
Authors
Weiwei Cai, Zhanguo Wei
Source
Journal: IEEE Access
[Institute of Electrical and Electronics Engineers]
Date: 2020-01-01
Volume/Issue: 8, pp. 48451-48463
Cited by: 125
Identifiers
DOI: 10.1109/access.2020.2979348
Abstract
The latest deep-learning-based methods have achieved impressive results on the challenging task of inpainting large missing regions in an image. However, these methods generally attempt to generate a single "optimal" result, ignoring the many other plausible results. Given the inherent uncertainty of the inpainting task, a single result can hardly be regarded as the desired restoration of the missing area. To address this design weakness of previous algorithms, we propose a novel deep generative model equipped with a new style extractor that extracts a style feature (latent vector) from the ground truth. The extracted style feature and the ground truth are then both fed into the generator. We also craft a consistency loss that guides the generated image to approximate the ground truth. Over training iterations, our generator learns the mapping from different latent vectors to their corresponding styles. The proposed model can therefore generate a large number of results that are consistent with the contextual semantics of the image. We evaluate the effectiveness of our model on three datasets: CelebA, PlantVillage, and MauFlex. Compared with state-of-the-art inpainting methods, our model offers inpainting results of both higher quality and greater diversity. The code and model will be made available at https://github.com/vivitsai/PiiGAN.
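
To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of the idea: a style extractor maps the ground-truth image to a latent style vector, the generator completes the masked image conditioned on that vector, and a consistency loss pulls the completion toward the ground truth. The module shapes, layer choices, and names such as StyleExtractor, Generator, and STYLE_DIM are illustrative assumptions, not the authors' actual PiiGAN architecture; in particular, the generator here receives the masked image plus a mask rather than the full ground truth, and the adversarial terms of the full model are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

STYLE_DIM = 64  # assumed size of the latent style vector (not from the paper)

class StyleExtractor(nn.Module):
    """Encodes an image into a compact latent style vector."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, STYLE_DIM)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class Generator(nn.Module):
    """Fills the missing region of a masked image, conditioned on a style vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1 + STYLE_DIM, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, masked_img, mask, style):
        b, _, h, w = masked_img.shape
        # Broadcast the style vector to a spatial map and concatenate it to the input.
        style_map = style.view(b, STYLE_DIM, 1, 1).expand(b, STYLE_DIM, h, w)
        out = self.net(torch.cat([masked_img, mask, style_map], dim=1))
        # Keep the known pixels, synthesize only the masked region.
        return masked_img * (1 - mask) + out * mask

extractor, generator = StyleExtractor(), Generator()

def training_step(gt, mask):
    """One simplified step: consistency loss only (adversarial losses omitted)."""
    masked = gt * (1 - mask)
    z = extractor(gt)                        # style feature extracted from the ground truth
    fake = generator(masked, mask, z)        # completion conditioned on that style
    consistency_loss = F.l1_loss(fake, gt)   # guide the generated image toward the ground truth
    return consistency_loss

# At test time, sampling different latent style vectors yields diverse completions:
# completions = [generator(masked, mask, torch.randn(1, STYLE_DIM)) for _ in range(5)]
```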