Keywords
Clothing
Computer Science
Representation
Artificial Intelligence
Adversarial Learning
Feature Learning
Context
Feature Extraction
Machine Learning
Identification
Pattern Recognition
Authors
Yu-Jhe Li, Xinshuo Weng, Kris Kitani
Source
Venue: Workshop on Applications of Computer Vision
Date: 2021-01-01
Citations: 44
Identifier
DOI: 10.1109/wacv48630.2021.00248
Abstract
Person re-identification (re-ID) aims to recognize instances of the same person in multiple images taken across different cameras. Existing re-ID methods tend to rely heavily on the assumption that query and gallery images of the same person show the same clothing. Unfortunately, this assumption may not hold for datasets captured over long periods of time. To tackle re-ID under clothing changes, we propose a novel representation learning method that generates a shape-based feature representation invariant to clothing. We call our model the Clothing Agnostic Shape Extraction Network (CASE-Net). CASE-Net learns a representation of a person that depends primarily on shape, via adversarial learning and feature disentanglement. Quantitative and qualitative results on five datasets (Div-Market, Market1501, and three large-scale datasets with clothing changes) show that our approach yields significant improvements over prior state-of-the-art methods.
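The abstract describes learning a clothing-invariant shape feature through adversarial learning and feature disentanglement. The sketch below is not the authors' CASE-Net; it is a minimal illustration of one common way to realize such an adversarial objective, assuming a gradient-reversal layer, a toy encoder, and placeholder dimensions and label counts chosen purely for the example.

```python
# Minimal sketch of adversarial feature disentanglement for clothing-invariant
# re-ID features. All module names, sizes, and the gradient-reversal design
# are illustrative assumptions, not the CASE-Net architecture.
import torch
import torch.nn as nn
from torch.autograd import Function


class GradReverse(Function):
    """Identity on the forward pass; negates (and scales) gradients on backward."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class ShapeEncoder(nn.Module):
    """Toy encoder standing in for the shape-feature branch."""

    def __init__(self, in_dim=2048, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(), nn.Linear(512, feat_dim))

    def forward(self, x):
        return self.net(x)


class AdversarialReID(nn.Module):
    """Identity classifier on the shape feature, plus an adversarial clothing
    classifier that the encoder learns to fool via gradient reversal."""

    def __init__(self, feat_dim=256, num_ids=751, num_clothes=10, lam=1.0):
        super().__init__()
        self.encoder = ShapeEncoder(feat_dim=feat_dim)
        self.id_head = nn.Linear(feat_dim, num_ids)
        self.cloth_head = nn.Linear(feat_dim, num_clothes)
        self.lam = lam

    def forward(self, x):
        f = self.encoder(x)
        id_logits = self.id_head(f)
        # The clothing head minimizes its own loss, while the encoder receives
        # the reversed gradient and is pushed toward clothing-agnostic features.
        cloth_logits = self.cloth_head(GradReverse.apply(f, self.lam))
        return f, id_logits, cloth_logits


if __name__ == "__main__":
    model = AdversarialReID()
    ce = nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    # Dummy batch: pre-pooled backbone features with identity and clothing labels.
    feats = torch.randn(8, 2048)
    id_labels = torch.randint(0, 751, (8,))
    cloth_labels = torch.randint(0, 10, (8,))

    _, id_logits, cloth_logits = model(feats)
    loss = ce(id_logits, id_labels) + ce(cloth_logits, cloth_labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```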