Discriminant
Embedding
Computer science
Subspace topology
Linear subspace
Transformer
Artificial intelligence
Classifier (UML)
Feature learning
Multiset
Pattern recognition (psychology)
Machine learning
Theoretical computer science
Mathematics
Engineering
Geometry
Combinatorics
Voltage
Electrical engineering
Authors
Wen Li, Cheng Zou, Meng Wang, F. R. Xu, Jianan Zhao, Ruobing Zheng, Yuan Cheng, Wei Chu
Source
Journal: Proceedings of the ... AAAI Conference on Artificial Intelligence
[Association for the Advancement of Artificial Intelligence (AAAI)]
Date: 2023-06-26
Volume/Issue: 37 (2): 1415-1423
Citations: 28
Identifier
DOI: 10.1609/aaai.v37i2.25226
Abstract
In the person re-identification (ReID) task, it remains challenging to learn discriminative representations with deep learning due to limited data. Generally, a model performs better as the amount of training data increases, and adding similar classes strengthens the classifier's ability to distinguish similar identities, thereby improving the discriminative power of the representation. In this paper, we propose a Diverse and Compact Transformer (DC-Former) that achieves a similar effect by splitting the embedding space into multiple diverse and compact subspaces. A compact embedding subspace helps the model learn more robust and discriminative embeddings for identifying similar classes, and the fusion of these diverse embeddings, which contain more fine-grained information, can further improve ReID performance. Specifically, multiple class tokens are used in a vision transformer to represent multiple embedding spaces. A self-diverse constraint (SDC) is then applied to these spaces to push them away from each other, making each embedding space diverse and compact. Furthermore, a dynamic weight controller (DWC) is designed to balance their relative importance during training. Experimental results are promising: our method surpasses previous state-of-the-art methods on several commonly used person ReID benchmarks. Our code is available at https://github.com/ant-research/Diverse-and-Compact-Transformer.
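The abstract's core idea, pushing multiple class-token embeddings away from each other, can be illustrated with a small sketch. This is not the paper's exact SDC formulation; it is a minimal, hypothetical version that penalizes the average pairwise cosine similarity among K class-token embeddings, so that minimizing the penalty drives the tokens toward mutual orthogonality:

```python
import numpy as np

def self_diverse_constraint(tokens: np.ndarray) -> float:
    """Hypothetical sketch of a self-diverse constraint (SDC).

    Penalizes the mean pairwise cosine similarity among the K
    class-token embeddings, pushing their subspaces apart.

    tokens: array of shape (K, D), one embedding per class token.
    """
    # L2-normalize each token embedding so dot products are cosines.
    normed = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    # Pairwise cosine-similarity matrix of shape (K, K).
    sim = normed @ normed.T
    k = tokens.shape[0]
    # Average over off-diagonal pairs only (a token is always
    # perfectly similar to itself, so the diagonal is excluded).
    off_diag = sim[~np.eye(k, dtype=bool)]
    return float(off_diag.mean())

# Identical tokens give the maximal penalty (1.0); orthogonal
# tokens give no penalty (0.0).
identical = np.ones((2, 4))
orthogonal = np.array([[1.0, 0.0], [0.0, 1.0]])
print(self_diverse_constraint(identical))   # 1.0
print(self_diverse_constraint(orthogonal))  # 0.0
```

In training, a term like this would be added to the ReID loss; the dynamic weight controller described in the abstract would then adjust the relative weight of each token's loss, though its exact mechanism is not specified here.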