Computer science
Embedding
Artificial intelligence
Robustness
Feature (computer vision)
Representation
Clothing
Task
Authors
Zan Gao, Hongwei Wei, Weili Guan, Weizhi Nie, Meng Liu, Meng Wang
Identifier
DOI:10.1145/3503161.3547884
Abstract
To date, only a few works have focused on the cloth-changing person Re-identification (ReID) task, and because it is very difficult to extract generalized and robust features to represent people wearing different clothes, their performance still needs improvement. Moreover, visual-semantic information is also often ignored. To address these issues, a novel multigranular visual-semantic embedding algorithm (MVSE) is proposed for cloth-changing person ReID, in which visual-semantic information and human attributes are embedded into the network so that generalized features of human appearance can be learned to effectively handle clothing changes. Specifically, to fully represent a person across clothing changes, a multigranular feature representation scheme (MGR) is employed to adaptively extract multilevel and multigranular feature information, and a cloth desensitization network (CDN) is then designed to improve feature robustness for the same person in different clothes by fully exploiting high-level human attributes. Moreover, to further address pose changes and occlusion under different camera perspectives, a partially semantically aligned network (PSA) is proposed to obtain visual-semantic information for aligning human attributes. Most importantly, these three modules are jointly explored in a unified framework. Extensive experimental results on four cloth-changing person ReID datasets demonstrate that the MVSE algorithm can extract highly robust feature representations of cloth-changing persons and outperforms state-of-the-art cloth-changing person ReID approaches.
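The abstract describes the composition of the three modules (MGR, CDN, PSA) in a unified framework but gives no implementation details. The following is a minimal PyTorch sketch of one plausible way such a pipeline could be wired together; all module internals, dimensions, attribute formats, and names are illustrative assumptions, not the authors' architecture.

import torch
import torch.nn as nn


class MGR(nn.Module):
    """Multigranular feature representation (assumed: global + part-level pooling)."""
    def __init__(self, in_dim: int = 2048, out_dim: int = 512, num_parts: int = 3):
        super().__init__()
        self.global_pool = nn.AdaptiveAvgPool2d(1)
        self.part_pool = nn.AdaptiveAvgPool2d((num_parts, 1))
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, feat_map: torch.Tensor) -> torch.Tensor:
        # feat_map: (B, C, H, W) backbone feature map
        g = self.global_pool(feat_map).flatten(2)          # (B, C, 1) global granularity
        p = self.part_pool(feat_map).flatten(2)            # (B, C, num_parts) part granularity
        multi = torch.cat([g, p], dim=2).transpose(1, 2)   # (B, 1 + num_parts, C)
        return self.proj(multi)                            # (B, 1 + num_parts, out_dim)


class CDN(nn.Module):
    """Cloth desensitization (assumed: attribute-conditioned channel gating)."""
    def __init__(self, dim: int = 512, num_attrs: int = 8):
        super().__init__()
        self.attr_embed = nn.Linear(num_attrs, dim)
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, feats: torch.Tensor, attrs: torch.Tensor) -> torch.Tensor:
        # feats: (B, G, D); attrs: (B, num_attrs) high-level human attributes
        a = self.attr_embed(attrs).unsqueeze(1)            # (B, 1, D)
        return feats * self.gate(a)                        # damp clothing-sensitive channels


class PSA(nn.Module):
    """Partial semantic alignment (assumed: cross-attention between parts and attributes)."""
    def __init__(self, dim: int = 512, num_attrs: int = 8):
        super().__init__()
        self.attr_embed = nn.Linear(num_attrs, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, feats: torch.Tensor, attrs: torch.Tensor) -> torch.Tensor:
        a = self.attr_embed(attrs).unsqueeze(1)            # (B, 1, D) attribute key/value
        aligned, _ = self.attn(feats, a, a)                # align part features to semantics
        return feats + aligned                             # residual alignment


class MVSE(nn.Module):
    """Joint framework: MGR -> CDN -> PSA, pooled into one identity embedding."""
    def __init__(self):
        super().__init__()
        self.mgr, self.cdn, self.psa = MGR(), CDN(), PSA()

    def forward(self, feat_map: torch.Tensor, attrs: torch.Tensor) -> torch.Tensor:
        feats = self.mgr(feat_map)
        feats = self.cdn(feats, attrs)
        feats = self.psa(feats, attrs)
        return feats.mean(dim=1)                           # (B, D) identity embedding


# Usage: a feature map from, e.g., a ResNet-50 backbone, plus an attribute vector.
emb = MVSE()(torch.randn(2, 2048, 16, 8), torch.rand(2, 8))
print(emb.shape)  # torch.Size([2, 512])

In this sketch the attribute vector drives both desensitization (gating away clothing-dependent channels in CDN) and alignment (attending part features to semantics in PSA), which mirrors the abstract's claim that human attributes are exploited in both roles; the actual loss functions and attribute sources would come from the paper itself.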