Keywords
Computer science
Artificial intelligence
Identification (biology)
Modality (human-computer interaction)
Computer vision
Pattern recognition (psychology)
Feature extraction
Authors
Jun Kong,Qibin He,Min Jiang,Tianshan Liu
Source
Journal: IEEE Signal Processing Letters
Publisher: Institute of Electrical and Electronics Engineers
Date: 2021-09-24
Volume: 28, Pages: 2003-2007
Identifier
DOI: 10.1109/LSP.2021.3115040
Abstract
Visible-infrared person re-identification (VI-ReID) is a challenging cross-modality pedestrian retrieval task that aims to match person images of the same identity between the visible and infrared modalities. Existing methods usually adopt a two-stream network to bridge the cross-modality gap, but they ignore the pixel-level discrepancy between visible and infrared images. Some methods introduce auxiliary modalities into the network, but they lack strong constraints on the feature distributions of the multiple modalities. In this letter, we propose a Dynamic Center Aggregation (DCA) loss with a mixed modality for VI-ReID. Concretely, we employ a mixed modality as a bridge between the visible and infrared modalities, reducing the difference between the two modalities at the pixel level. The mixed modality is generated by a Dual-modality Feature Mixer (DFM), which combines the features of visible and infrared images. Moreover, we dynamically adjust the relative distances across the multiple modalities through the DCA loss, which is conducive to learning modality-invariant features. We evaluate the proposed method on two publicly available VI-ReID datasets (SYSU-MM01 and RegDB). Experimental results demonstrate that our method achieves competitive performance.
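The abstract names two components, the Dual-modality Feature Mixer (DFM) and the Dynamic Center Aggregation (DCA) loss, without giving their formulations. The following is a minimal PyTorch sketch of the general idea only, under stated assumptions: it assumes the mixer forms a convex combination of visible and infrared features, and that the center-aggregation loss pulls the per-modality feature centers of each identity toward their joint mean. The class and function names, the mix_ratio parameter, and the aggregation rule are hypothetical illustrations, not the paper's definitions, and the paper's dynamic distance-adjustment mechanism is omitted here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DualModalityFeatureMixer(nn.Module):
    # Hypothetical mixer: blends visible and infrared feature maps to
    # synthesize a "mixed modality" sample (the actual DFM may differ).
    def __init__(self, mix_ratio: float = 0.5):
        super().__init__()
        self.mix_ratio = mix_ratio

    def forward(self, feat_vis: torch.Tensor, feat_ir: torch.Tensor) -> torch.Tensor:
        # Convex combination of the two modality features.
        return self.mix_ratio * feat_vis + (1.0 - self.mix_ratio) * feat_ir

def center_aggregation_loss(features: torch.Tensor,
                            labels: torch.Tensor,
                            modalities: torch.Tensor) -> torch.Tensor:
    # Assumed form of a center-aggregation objective: for each identity,
    # compute one feature center per modality and penalize the spread of
    # those centers around their joint mean.
    loss = features.new_zeros(())
    count = 0
    for pid in labels.unique():
        id_mask = labels == pid
        centers = []
        for m in modalities.unique():
            mask = id_mask & (modalities == m)
            if mask.any():
                centers.append(features[mask].mean(dim=0))
        if len(centers) < 2:
            continue
        centers = torch.stack(centers)        # (num_modalities, D)
        joint_center = centers.mean(dim=0)    # aggregation target
        loss = loss + ((centers - joint_center) ** 2).sum(dim=1).mean()
        count += 1
    return loss / max(count, 1)

# Usage example with random tensors standing in for backbone embeddings.
if __name__ == "__main__":
    feats = F.normalize(torch.randn(12, 256), dim=1)  # 12 samples, 256-D embeddings
    ids = torch.tensor([0] * 6 + [1] * 6)             # two identities
    mods = torch.tensor([0, 1, 2] * 4)                # 0=visible, 1=infrared, 2=mixed
    print(center_aggregation_loss(feats, ids, mods))

In this sketch the mixed-modality samples simply enter the loss as a third modality label, so the loss treats them as the bridge between the visible and infrared centers; how the paper weights or schedules these distances dynamically is not reproduced here.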