Authors
Yan Lu, Yue Wu, Bin Liu, Tianzhu Zhang, Baopu Li, Qi Chu, Nenghai Yu
Source
Journal: Cornell University - arXiv
Date: 2020-01-01
Identifier
DOI: 10.48550/arxiv.2002.12489
Abstract
Cross-modality person re-identification (cm-ReID) is a challenging but key technology for intelligent video analysis. Existing works mainly focus on learning a common representation by embedding different modalities into the same feature space. However, learning only the common characteristics entails a great loss of information, lowering the upper bound of feature distinctiveness. In this paper, we tackle this limitation by proposing a novel cross-modality shared-specific feature transfer algorithm (termed cm-SSFT) to exploit the potential of both the modality-shared information and the modality-specific characteristics to boost re-identification performance. We model the affinities of samples from different modalities according to their shared features and then transfer both shared and specific features within and across modalities. We also propose a complementary feature learning strategy, including modality adaptation, project adversarial learning, and reconstruction enhancement, to learn discriminative and complementary shared and specific features for each modality. The entire cm-SSFT algorithm can be trained in an end-to-end manner. We conducted comprehensive experiments to validate the superiority of the overall algorithm and the effectiveness of each component. The proposed algorithm significantly outperforms the state of the art by 22.5% and 19.3% mAP on the two mainstream benchmark datasets, SYSU-MM01 and RegDB, respectively.
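The core transfer step the abstract describes — computing cross-modality affinities from shared features and using them to propagate both shared and specific features — can be illustrated with a minimal sketch. This is not the paper's exact formulation (cm-SSFT uses a learned affinity graph with intra- and inter-modality edges trained end-to-end); the function name, the cosine-softmax affinity, and the concatenation-based fusion below are illustrative assumptions.

```python
import numpy as np

def affinity_transfer(shared_a, shared_b, specific_a, specific_b):
    """Illustrative sketch: affinities come from modality-shared features
    and are used to transfer shared + specific features across modalities."""
    # Row-normalize shared features so dot products are cosine similarities.
    na = shared_a / np.linalg.norm(shared_a, axis=1, keepdims=True)
    nb = shared_b / np.linalg.norm(shared_b, axis=1, keepdims=True)
    affinity = na @ nb.T  # (n_a, n_b) cross-modality affinity matrix
    # Softmax over neighbors turns affinities into transfer weights.
    weights = np.exp(affinity) / np.exp(affinity).sum(axis=1, keepdims=True)
    # Each modality-A sample receives shared and specific information
    # from modality-B samples, weighted by affinity.
    transferred = weights @ np.concatenate([shared_b, specific_b], axis=1)
    # Fuse a sample's own features with the transferred ones (simple concat).
    return np.concatenate([shared_a, specific_a, transferred], axis=1)

rng = np.random.default_rng(0)
feats = affinity_transfer(rng.normal(size=(4, 8)), rng.normal(size=(6, 8)),
                          rng.normal(size=(4, 8)), rng.normal(size=(6, 8)))
print(feats.shape)  # (4, 32): own shared (8) + own specific (8) + transferred (16)
```

The key design point mirrored here is that only the shared features (comparable across modalities) define the affinities, while the modality-specific features ride along in the transfer, so their discriminative information is not discarded.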