Computer science
Sketch
Artificial intelligence
Pooling
Feature (linguistics)
Locality
Convolution (computer science)
Pattern recognition (psychology)
Algorithm
Artificial neural network
Linguistics
Philosophy
Authors
Heng Liu,Xu Yao,Feng Chen
Identifier
DOI:10.1016/j.engappai.2022.105608
Abstract
Sketch-to-image synthesis aims to generate realistic images that exactly match the input sketches or edge maps. Most existing sketch-to-image synthesis methods use generative adversarial networks (GANs) trained on numerous pairs of sketches and real images. Because of the locality of convolution, the low-level layers of the generators in these GANs lack global perception, so the feature maps they produce easily overlook global cues. Since a global receptive field is crucial for capturing the non-local structures and features of sketches, the absence of global context degrades the quality of the generated images. Some recent models turn to self-attention to construct global dependencies; however, self-attention is impractical for large feature maps because its computational complexity is quadratic in the size of the feature map. To address these problems, we propose Sketch2Photo, a new image synthesis approach that captures both global contexts and local features to generate photo-realistic images from weak or partial sketches or edge maps. We employ fast Fourier convolution (FFC) residual blocks to create global receptive fields in the bottom layers of the network, and incorporate Swin Transformer block (STB) units to efficiently obtain long-range global contexts for large feature maps. We also present an improved spatial attention pooling (ISAP) module to relax the strict alignment requirement between incomplete sketches and generated images. Quantitative and qualitative experiments on multiple public datasets demonstrate the superiority of the proposed approach over many other sketch-to-image synthesis methods. The project code is available at https://github.com/hengliusky/Skecth2Photo.
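The key intuition behind fast Fourier convolution is that filtering in the Fourier domain makes every output pixel depend on every input pixel, so a single layer already has an image-wide receptive field. The following is a minimal NumPy sketch of that idea only; it is not the authors' implementation, and the function names, the residual wiring, and the purely pointwise spectral filtering are simplifying assumptions:

```python
import numpy as np

def ffc_spectral_branch(x, w_real, w_imag):
    """Pointwise complex filtering in the Fourier domain.

    x: (C, H, W) feature map. The 2-D FFT couples all spatial
    positions, so this branch has a global receptive field.
    """
    spec = np.fft.rfft2(x, axes=(-2, -1))      # (C, H, W//2 + 1), complex
    spec = spec * (w_real + 1j * w_imag)       # learned per-frequency weights
    return np.fft.irfft2(spec, s=x.shape[-2:], axes=(-2, -1))

def ffc_residual_block(x, w_real, w_imag):
    """Residual block: identity (local) path plus global spectral path."""
    return x + ffc_spectral_branch(x, w_real, w_imag)

# Toy usage: a 4-channel 16x16 feature map with identity spectral weights,
# so the spectral branch reproduces its input and the block doubles x.
c, h, w = 4, 16, 16
x = np.random.randn(c, h, w)
w_real = np.ones((c, h, w // 2 + 1))
w_imag = np.zeros((c, h, w // 2 + 1))
y = ffc_residual_block(x, w_real, w_imag)
print(y.shape)  # (4, 16, 16)
```

In the full FFC design the spectral path also includes convolutions and normalization, and it runs alongside a standard local convolution branch; the sketch keeps only the frequency-domain step that supplies the global receptive field discussed in the abstract.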