Discriminant
Computer science
Artificial intelligence
Inference
Robustness (evolution)
Pattern recognition (psychology)
Cluster analysis
Benchmark (surveying)
Domain (mathematical analysis)
Domain adaptation
Identification (biology)
Relation (database)
Machine learning
Data mining
Mathematics
Gene
Classifier (UML)
Mathematical analysis
Biology
Botany
Chemistry
Biochemistry
Geography
Geodesy
Authors
Shuang Li, Fan Li, Jinxing Li, Huafeng Li, Bob Zhang, Dapeng Tao, Xinbo Gao
Identifier
DOI: 10.1109/tnnls.2023.3281504
Abstract
Domain adaptation person re-identification (Re-ID) is a challenging task, which aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain. Recently, some clustering-based domain adaptation Re-ID methods have achieved great success. However, these methods ignore the adverse influence of different camera styles on pseudo-label prediction. The reliability of the pseudo-labels plays a key role in domain adaptation Re-ID, while the different camera styles bring great challenges to pseudo-label prediction. To this end, a novel method is proposed, which bridges the gap between different cameras and extracts more discriminative features from an image. Specifically, an intra-to-inter mechanism is introduced, in which samples are first grouped within their own cameras and then aligned at the class level across different cameras, followed by our logical relation inference (LRI). Thanks to these strategies, the logical relationship between easy classes and hard classes is established, preventing the sample loss caused by discarding hard samples. Furthermore, we also present a multiview information interaction (MvII) module that treats the features of different images of the same pedestrian as patch tokens, capturing the global consistency of a pedestrian, which contributes to discriminative feature extraction. Unlike the existing clustering-based methods, our method employs a two-stage framework that generates reliable pseudo-labels from the intracamera and intercamera views, respectively, to differentiate the camera styles, subsequently increasing its robustness. Extensive experiments on several benchmark datasets show that the proposed method outperforms a wide range of state-of-the-art methods. The source code has been released at https://github.com/lhf12278/LRIMV.
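The released repository above is the reference implementation. As a rough, hedged illustration of the two-stage intra-to-inter idea only, the sketch below clusters target-domain features separately within each camera and then merges clusters across cameras by centroid similarity; the function name, the DBSCAN choice, and the thresholds (eps, merge_threshold) are assumptions for illustration, and a plain similarity threshold stands in for the paper's logical relation inference (LRI).

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics.pairwise import cosine_similarity

def intra_to_inter_pseudo_labels(features, camera_ids, eps=0.5, merge_threshold=0.7):
    """Hypothetical two-stage pseudo-label generation.
    Stage 1: cluster samples separately within each camera (intracamera view).
    Stage 2: merge clusters across cameras whose centroids are similar
    (intercamera view), so one identity seen by several cameras shares
    a single pseudo-label."""
    features = np.asarray(features)
    camera_ids = np.asarray(camera_ids)

    centroids, members = [], []          # one centroid per intracamera cluster
    for cam in np.unique(camera_ids):
        idx = np.where(camera_ids == cam)[0]
        labels = DBSCAN(eps=eps, min_samples=2, metric="cosine").fit_predict(features[idx])
        for c in np.unique(labels):
            if c == -1:                  # DBSCAN outliers stay unlabeled
                continue
            sel = idx[labels == c]
            centroids.append(features[sel].mean(axis=0))
            members.append(sel)

    # Stage 2: greedy class-level alignment across cameras by centroid similarity
    # (a simple stand-in for the paper's logical relation inference).
    sim = cosine_similarity(np.stack(centroids))
    global_label = -np.ones(len(centroids), dtype=int)
    next_label = 0
    for i in range(len(centroids)):
        if global_label[i] == -1:
            global_label[i] = next_label
            next_label += 1
        for j in range(i + 1, len(centroids)):
            if global_label[j] == -1 and sim[i, j] >= merge_threshold:
                global_label[j] = global_label[i]

    pseudo = -np.ones(len(features), dtype=int)   # -1 marks unclustered samples
    for lbl, sel in zip(global_label, members):
        pseudo[sel] = lbl
    return pseudo
```

In this reading, grouping within each camera first removes camera style as a confounder before identities are matched across cameras, which is the motivation the abstract gives for the two-stage design.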