Clothing
Computer science
Style (visual arts)
Generator (circuit theory)
Artificial intelligence
Image (mathematics)
Fidelity
Artificial neural network
Visual arts
Art
Telecommunications
Power (physics)
Physics
Archaeology
Quantum mechanics
History
Authors
Shuhui Jiang,Jun Li,Yun Fu
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
[Institute of Electrical and Electronics Engineers]
Date: 2021-02-26
Volume/Issue: 33 (9): 4538-4550
Citations: 30
Identifiers
DOI:10.1109/tnnls.2021.3057892
Abstract
In this article, we work on generating fashion style images with deep neural network algorithms. Given a garment image, and single or multiple style images (e.g., flower, blue and white porcelain), it is a challenge to generate a synthesized clothing image with single or mix-and-match styles due to the need to preserve global clothing contents with coverable styles, to achieve high fidelity of local details, and to conform different styles with specific areas. To address this challenge, we propose a fashion style generator (FashionG) framework for the single-style generation and a spatially constrained FashionG (SC-FashionG) framework for mix-and-match style generation. Both FashionG and SC-FashionG are end-to-end feedforward neural networks that consist of a generator for image transformation and a discriminator for preserving content and style globally and locally. Specifically, a global-based loss is calculated based on full images, which can preserve the global clothing form and design. A patch-based loss is calculated based on image patches, which can preserve detailed local style patterns. We develop an alternating patch-global optimization methodology to minimize these losses. Compared with FashionG, SC-FashionG employs an additional spatial constraint to ensure that each style is blended only onto a specific area of the clothing image. Extensive experiments demonstrate the effectiveness of both single-style and mix-and-match style generations.
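The abstract's alternating patch-global optimization can be sketched in miniature as follows. This is a hedged illustration only: the paper computes its global and patch losses through a trained discriminator, whereas here `global_loss`, `patch_loss`, and the even/odd alternation schedule are simplified stand-ins (plain mean-squared differences over full images and random patches) chosen to make the alternation structure concrete.

```python
import numpy as np

def global_loss(generated, content):
    # Global-based loss: computed on the full image, standing in for the
    # objective that preserves overall clothing form and design.
    return float(np.mean((generated - content) ** 2))

def patch_loss(generated, style, patch=8, n_patches=4, rng=None):
    # Patch-based loss: computed on randomly sampled image patches,
    # standing in for the objective that preserves local style detail.
    rng = rng or np.random.default_rng(0)
    h, w = generated.shape[:2]
    total = 0.0
    for _ in range(n_patches):
        y = rng.integers(0, h - patch + 1)
        x = rng.integers(0, w - patch + 1)
        g = generated[y:y + patch, x:x + patch]
        s = style[y:y + patch, x:x + patch]
        total += np.mean((g - s) ** 2)
    return float(total / n_patches)

def alternating_step(step, generated, content, style):
    # Alternate between the global objective (even steps) and the patch
    # objective (odd steps); the 50/50 schedule is an assumption here.
    if step % 2 == 0:
        return "global", global_loss(generated, content)
    return "patch", patch_loss(generated, style)
```

In a real training loop each returned loss would be backpropagated through the generator; SC-FashionG would additionally mask `patch_loss` so that each style image only contributes on its assigned region of the garment.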