Keywords
Computer Science, Artificial Intelligence, Modality (human-computer interaction), Feature Vector, Feature (linguistics), Context (archaeology), RGB Color Model, Location, Graph, Pattern Recognition (psychology), Computer Vision, Natural Language Processing, Theoretical Computer Science, Biology, Philosophy, Paleontology, Linguistics
Authors
Jian Feng, Feng Chen, Yimu Ji, Fei Wu, Jing Sun
Source
Journal: IEEE Signal Processing Letters (Institute of Electrical and Electronics Engineers)
Date: 2021-01-01
Volume/Pages: 28: 1425-1429
Citations: 18
Identifiers
DOI: 10.1109/lsp.2021.3093865
Abstract
The modality and pose variance between RGB and infrared (IR) images are two key challenges for RGB-IR person re-identification. Existing methods mainly focus on pixel- or feature-level alignment to handle intra-class variation and the cross-modality discrepancy. However, these methods struggle to keep semantic identity consistent between the global and local representations, and this consistency is important for the cross-modality pedestrian re-identification task. In this work, we propose a novel cross-modality graph reasoning method (CGRNet) to globally model and reason over the relations between modalities and context, and to keep semantic identity consistent between the global and local representations. Specifically, we propose a local modality-similarity module that maps the distributions of modality-specific features into a common subspace without losing identity information. In addition, we squeeze the input features of the RGB and IR images into channel-wise global vectors and, through graph reasoning, infer the identity and modality relationships within each vector. Extensive experiments on two datasets demonstrate that our approach outperforms the existing state of the art. The code is available at https://github.com/fegnyujian/CGRNet.
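The abstract mentions squeezing RGB and IR feature maps into channel-wise global vectors and inferring their relationships by graph reasoning; the paper's actual architecture is in the linked repository. The NumPy sketch below only illustrates the two generic ingredients in isolation: global average pooling as the channel-wise squeeze, and one similarity-weighted message-passing step as a minimal form of graph reasoning. All function names and tensor shapes here are illustrative assumptions, not CGRNet's real interface.

```python
import numpy as np

def channel_squeeze(feat):
    """Global average pooling: squeeze a (C, H, W) feature map
    into a channel-wise global vector of shape (C,)."""
    return feat.mean(axis=(1, 2))

def graph_reasoning(nodes):
    """One similarity-weighted message-passing step over node
    vectors of shape (N, C): build a soft adjacency matrix from
    pairwise dot products (row-wise softmax), then propagate
    features along it."""
    sim = nodes @ nodes.T                       # (N, N) pairwise affinity
    adj = np.exp(sim - sim.max(axis=1, keepdims=True))
    adj = adj / adj.sum(axis=1, keepdims=True)  # rows sum to 1
    return adj @ nodes                          # aggregated node features

rng = np.random.default_rng(0)
rgb_feat = rng.standard_normal((64, 8, 4))  # hypothetical (C, H, W) RGB features
ir_feat = rng.standard_normal((64, 8, 4))   # hypothetical (C, H, W) IR features

# Squeeze each modality into a channel-wise global vector, then
# reason over the cross-modality relationship via the graph step.
nodes = np.stack([channel_squeeze(rgb_feat), channel_squeeze(ir_feat)])
out = graph_reasoning(nodes)
print(out.shape)  # (2, 64)
```

In this toy version each modality contributes a single node; the refined output for each modality is a similarity-weighted mixture of both global vectors, which is the basic mechanism a graph-reasoning module uses to exchange information across modalities.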