Authors
Zhiyuan Li, Jia Sun, Yanfeng Li, Chaofan Hao
Abstract
Visible-infrared person re-identification aims to match target persons across the visible and infrared modalities. In this paper, we propose a visible-infrared person re-identification method based on a modal-identity dual-central loss. The modal-identity dual-central loss constrains the network to extract modality-shared features by pulling together the infrared and visible modality centers of the same identity, while pushing apart the identity centers of different persons to maintain inter-class discriminability. In addition, to extract more discriminative information, we propose a feature pyramid integration network based on efficient channel attention. Specifically, the network fuses high-level semantic features with fine-grained low-level features to build a multi-scale feature map, and introduces an efficient channel attention module to enhance salient person features. Extensive experiments on the SYSU-MM01 and RegDB datasets validate the proposed method.
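The pull/push behavior of a dual-central loss can be illustrated with a minimal NumPy sketch. This is a hypothetical reconstruction from the abstract alone, not the authors' implementation: the function name, the squared-distance pull term, and the hinge-style push term with a `margin` parameter are all assumptions.

```python
import numpy as np

def dual_central_loss(vis_feats, ir_feats, labels, margin=0.5):
    """Hypothetical sketch of a modal-identity dual-central loss.

    vis_feats, ir_feats: (N, D) features from the visible / infrared branches.
    labels: (N,) identity labels, aligned across both modalities here
            purely for illustration.
    """
    ids = np.unique(labels)
    # Per-identity center in each modality.
    vis_centers = np.stack([vis_feats[labels == i].mean(axis=0) for i in ids])
    ir_centers = np.stack([ir_feats[labels == i].mean(axis=0) for i in ids])

    # Pull term: visible and infrared centers of the same identity
    # are drawn together (squared Euclidean distance).
    pull = np.sum((vis_centers - ir_centers) ** 2, axis=1).mean()

    # Push term: identity centers (averaged over modalities) of
    # different identities are kept at least `margin` apart.
    id_centers = (vis_centers + ir_centers) / 2.0
    push = 0.0
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            d = np.linalg.norm(id_centers[a] - id_centers[b])
            push += max(0.0, margin - d)

    return pull + push
```

When the two modality centers of each identity coincide and different identities are well separated, the loss is zero; any cross-modality gap within an identity raises the pull term.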