Computer science
Modality (human–computer interaction)
Artificial intelligence
Feature (linguistics)
Discriminative model
Focus (optics)
Feature learning
Matching (statistics)
Constraint (computer-aided design)
Computer vision
Ranking (information retrieval)
Similarity (geometry)
Task (project management)
Identification (biology)
Feature extraction
Metric (unit)
Pattern recognition (psychology)
Image (mathematics)
Mathematics
Physics
Botany
Biology
Philosophy
Operations management
Linguistics
Statistics
Geometry
Management
Optics
Economics
Authors
Mang Ye,Xiangyuan Lan,Zheng Wang,Pong C. Yuen
Identifier
DOI:10.1109/tifs.2019.2921454
Abstract
Visible thermal person re-identification (VT-REID) is the task of matching person images captured by thermal and visible cameras, which is an extremely important issue in night-time surveillance applications. Existing cross-modality recognition works mainly focus on learning sharable feature representations to handle the cross-modality discrepancies. However, apart from the cross-modality discrepancy caused by different camera spectra, VT-REID also suffers from large cross-modality and intra-modality variations caused by different camera environments, human poses, and so on. In this paper, we propose a dual-path network with a novel bi-directional dual-constrained top-ranking (BDTR) loss to learn discriminative feature representations. It is featured in two aspects: 1) end-to-end learning without an extra metric learning step and 2) the dual constraint simultaneously handles the cross-modality and intra-modality variations to ensure feature discriminability. Meanwhile, a bi-directional center-constrained top-ranking (eBDTR) loss is proposed to incorporate the previous two constraints into a single formula, which preserves the properties to handle both cross-modality and intra-modality variations. Extensive experiments on two cross-modality re-ID datasets demonstrate the superiority of the proposed method compared with the state of the art.
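The abstract describes the BDTR loss only at a high level. The sketch below, in PyTorch, illustrates one common way such a bi-directional dual-constrained top-ranking loss can be written: for each anchor, the hardest positive and hardest negative are mined, a margin-based ranking hinge is applied across modalities in both directions (the cross-modality constraint), and the same hinge is applied within each modality (the intra-modality constraint). The function names, margin values, use of Euclidean distance, and hardest-sample mining are assumptions made for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F


def top_ranking(anchors, anchor_ids, gallery, gallery_ids, margin):
    """Hinge on hardest-positive vs. hardest-negative distance per anchor."""
    dist = torch.cdist(anchors, gallery)                           # pairwise Euclidean distances
    same_id = anchor_ids.unsqueeze(1) == gallery_ids.unsqueeze(0)  # positive-pair mask
    hardest_pos = dist.masked_fill(~same_id, float("-inf")).max(dim=1).values
    hardest_neg = dist.masked_fill(same_id, float("inf")).min(dim=1).values
    return F.relu(margin + hardest_pos - hardest_neg).mean()


def bdtr_loss(vis_feat, vis_ids, thr_feat, thr_ids,
              margin_cross=0.5, margin_intra=0.3):
    """Bi-directional cross-modality ranking plus intra-modality constraints
    (margin values are hypothetical)."""
    # Cross-modality constraint, applied in both directions:
    # visible anchors against the thermal gallery and vice versa.
    cross = (top_ranking(vis_feat, vis_ids, thr_feat, thr_ids, margin_cross)
             + top_ranking(thr_feat, thr_ids, vis_feat, vis_ids, margin_cross))
    # Intra-modality constraint within each modality (anchor-to-self pairs
    # are not excluded here for brevity; they contribute a zero distance).
    intra = (top_ranking(vis_feat, vis_ids, vis_feat, vis_ids, margin_intra)
             + top_ranking(thr_feat, thr_ids, thr_feat, thr_ids, margin_intra))
    return cross + intra
```

In an end-to-end setup along the lines the abstract describes, `vis_feat` and `thr_feat` would be the embeddings produced by the two branches of a dual-path network for a mini-batch of visible and thermal images with shared identity labels, and a loss like `bdtr_loss` would typically be combined with an identity classification loss.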