Computer science
Artificial intelligence
Generative adversarial network
Identification (biology)
Quality (philosophy)
Image (mathematics)
Identity (music)
Machine learning
Adversarial system
Physics
Epistemology
Acoustics
Biology
Philosophy
Botany
Authors
Wu Tian, Rongbo Zhu, Shaohua Wan
Abstract
Generative adversarial network (GAN)-based person re-identification (re-id) schemes offer a promising way to augment training data in practical applications. However, existing solutions perform poorly because data generation is decoupled from re-id training and diverse data are scarce in real-world scenarios. In this paper, a person re-id model (IDGAN) based on a semantic-map-guided identity transfer GAN is proposed to improve re-id performance. Guided by the semantic map, IDGAN efficiently and accurately generates pedestrian images with varying poses, viewpoints, and backgrounds, improving the diversity of the training data. To increase visual realism, IDGAN applies a gradient augmentation method based on local quality attention to refine the generated images locally. A two-stage joint training framework then allows the GAN and the re-id network to learn from each other, so that the generated data are better exploited. Detailed experimental results demonstrate that, compared with existing state-of-the-art methods, IDGAN produces high-quality images and significantly enhances re-id performance: the FID of images generated on the Market-1501 dataset is reduced by 1.15, and mAP on the Market-1501 and DukeMTMC-reID datasets is increased by 3.3% and 2.6%, respectively.
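The two-stage joint training framework described in the abstract, in which the GAN and the re-id network learn from each other, can be illustrated with a short sketch. The PyTorch code below is a minimal, hypothetical illustration only, assuming toy architectures (Generator, Discriminator, ReIDNet), an adversarial pre-training stage, and an identity-feature feedback term for the generator; none of the module names, loss weights, or hyperparameters come from the paper.

```python
# Hypothetical sketch of a two-stage joint training loop in the spirit of
# IDGAN's description: stage 1 pre-trains a semantic-map-conditioned GAN,
# stage 2 alternates re-id training on real + generated images with a
# re-id feedback term for the generator. All architectures, loss weights,
# and names here are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy conditional generator: (image, semantic map) -> transferred image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())
    def forward(self, img, sem_map):
        return self.net(torch.cat([img, sem_map], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))
    def forward(self, x):
        return self.net(x)

class ReIDNet(nn.Module):
    """Toy re-id backbone producing an embedding and identity logits."""
    def __init__(self, num_ids=10, dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
        self.classifier = nn.Linear(dim, num_ids)
    def forward(self, x):
        feat = self.backbone(x)
        return feat, self.classifier(feat)

G, D, R = Generator(), Discriminator(), ReIDNet()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_r = torch.optim.Adam(R.parameters(), lr=3e-4)
bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()

# Dummy batch standing in for Market-1501-style data.
real = torch.randn(8, 3, 64, 32)   # pedestrian crops
sem  = torch.randn(8, 1, 64, 32)   # target semantic maps
ids  = torch.randint(0, 10, (8,))  # identity labels

# ---- Stage 1: pre-train the GAN alone (assumed adversarial objective) ----
for _ in range(2):
    fake = G(real, sem)
    d_loss = (bce(D(real), torch.ones(8, 1))
              + bce(D(fake.detach()), torch.zeros(8, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    g_loss = bce(D(fake), torch.ones(8, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# ---- Stage 2: joint training; GAN and re-id network inform each other ----
for _ in range(2):
    fake = G(real, sem)
    # Re-id step: generated images keep their source identity labels.
    _, logits_real = R(real)
    _, logits_fake = R(fake.detach())
    r_loss = ce(logits_real, ids) + 0.5 * ce(logits_fake, ids)
    opt_r.zero_grad(); r_loss.backward(); opt_r.step()
    # Generator step: adversarial loss plus a re-id feedback term asking the
    # transferred image to preserve the source identity's features. Only G's
    # parameters are updated here.
    feat_real, _ = R(real)
    feat_fake, _ = R(fake)
    g_loss = (bce(D(fake), torch.ones(8, 1))
              + F.l1_loss(feat_fake, feat_real.detach()))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Coupling the generator to the re-id feature space in stage 2 is one plausible reading of "learn from each other": the re-id network benefits from pose- and background-varied samples that keep their source identity labels, while the generator is pushed toward images whose identity features match the source.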