Journal: IEEE Transactions on Industrial Informatics [Institute of Electrical and Electronics Engineers] · Date: 2024-02-23 · Volume/Issue: 20 (5): 7754-7763 · Citations: 5
Identifier
DOI:10.1109/tii.2024.3359432
Abstract
Visible–infrared person reidentification (VI-ReID) aims to match pedestrian identities across different spectra. The major challenge of the VI-ReID task is the modality gap between infrared and visible images. Existing approaches try to design networks based on a single-stage training strategy to extract features. However, they often rely excessively on a particular type of feature, such as modality-specific features or modality-independent features, and overlook the significance of the diverse features obtained by combining them. To address this problem, we propose a diverse-feature collaborative progressive learning network (DCPLNet) for VI-ReID in this article. By exploiting diverse information, our DCPLNet can effectively learn informative representations that reduce the modality gap. Specifically, we propose a novel three-stage progressive learning strategy (t-PLS) to progressively learn diverse features. Within the proposed t-PLS, we design a contour feature enhancement module to mine human contour features and introduce a perceptual contour feature loss to supervise feature extraction. Finally, we present a batch adaptation module to establish feature links between samples. Extensive experiments on the SYSU-MM01, RegDB, and LLCM datasets demonstrate that our proposed model performs better than most state-of-the-art methods.