Modality (human-computer interaction)
Feature (linguistics)
Discriminative
Computer science
Artificial intelligence
Pattern recognition (psychology)
Benchmark (surveying)
Compensation (psychology)
Computer vision
Psychology
Psychoanalysis
Geodesy
Linguistics
Philosophy
Geography
Authors
Qiang Zhang, Changzhou Lai, Jianan Liu, Nianchang Huang, Jungong Han
Identifier
DOI: 10.1109/cvpr52688.2022.00720
Abstract
For Visible-Infrared person ReIDentification (VI-ReID), existing modality-specific information compensation models try to generate images of the missing modality from the existing one to reduce the cross-modality discrepancy. However, because of the large modality discrepancy between visible and infrared images, the generated images are usually of low quality and introduce much interfering information (e.g., color inconsistency), which greatly degrades subsequent VI-ReID performance. Alternatively, we present a novel Feature-level Modality Compensation Network (FMCNet) for VI-ReID, which compensates for the missing modality-specific information at the feature level rather than the image level, i.e., it directly generates the missing modality-specific features of one modality from the existing modality-shared features of the other. This enables our model to generate mainly discriminative, person-related modality-specific features and discard non-discriminative ones, benefiting VI-ReID. To this end, a single-modality feature decomposition module is first designed to decompose single-modality features into modality-specific and modality-shared ones. Then, a feature-level modality compensation module is presented to generate the missing modality-specific features from the existing modality-shared ones. Finally, a shared-specific feature fusion module is proposed to combine the existing and generated features for VI-ReID. The effectiveness of the proposed model is verified on two benchmark datasets.
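The abstract describes a three-stage pipeline: decompose single-modality features into modality-specific and modality-shared parts, generate the missing modality-specific features from the other modality's shared features, then fuse existing and generated features. The following is a minimal numpy sketch of that data flow only; the projection matrices, dimensions, and function names are hypothetical stand-ins for the paper's learned sub-networks, not the actual FMCNet implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
FEAT_DIM, SPEC_DIM, SHARED_DIM = 8, 4, 4  # hypothetical feature sizes

# Hypothetical linear projections standing in for learned modules.
W_spec = rng.standard_normal((FEAT_DIM, SPEC_DIM))
W_shared = rng.standard_normal((FEAT_DIM, SHARED_DIM))
W_comp = rng.standard_normal((SHARED_DIM, SPEC_DIM))

def decompose(feat):
    """Single-modality feature decomposition: split a backbone feature
    into modality-specific and modality-shared parts."""
    return feat @ W_spec, feat @ W_shared

def compensate(shared):
    """Feature-level modality compensation: generate the missing
    modality-specific features from the existing modality-shared ones."""
    return shared @ W_comp

def fuse(specific, shared):
    """Shared-specific feature fusion: combine features into one
    descriptor (concatenation as a simple stand-in)."""
    return np.concatenate([specific, shared], axis=-1)

# A visible image's backbone feature; its infrared-specific
# counterpart is missing and must be generated.
visible_feat = rng.standard_normal(FEAT_DIM)
vis_spec, vis_shared = decompose(visible_feat)
ir_spec_generated = compensate(vis_shared)

# Fuse existing (visible-specific, shared) and generated (infrared-specific) features.
descriptor = fuse(np.concatenate([vis_spec, ir_spec_generated]), vis_shared)
print(descriptor.shape)  # (12,)
```

The sketch only illustrates which tensors feed which module; in the paper each stand-in matrix corresponds to a trained network component, and the fusion is learned rather than plain concatenation.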