Computer science
Discriminative
Feature learning
Artificial intelligence
Encoder
Cluster analysis
Domain (mathematical analysis)
Transformer
Autoencoder
Pattern recognition (psychology)
Benchmark (surveying)
Feature vector
Pairwise comparison
Machine learning
Deep learning
Engineering
Mathematical analysis
Mathematics
Geodesy
Voltage
Geography
Electrical engineering
Operating system
Authors
Ran Wei, Jianyang Gu, Shuting He, Wei Jiang
Identifier
DOI: 10.1109/tits.2022.3225025
Abstract
Fully-supervised vehicle re-identification (re-ID) methods suffer performance degradation when applied to new image domains. Developing unsupervised domain adaptation (UDA) to transfer knowledge from the learned source domain to a new, unlabeled target domain therefore becomes an indispensable task. It is challenging because different domains have varied image appearances, such as different backgrounds, illumination, and resolution, especially when cameras have different viewpoints. To tackle this domain gap, a novel Transformer-based Domain-Specific Representation learning network (TDSR) is proposed to dynamically focus on the detailed hints corresponding to each domain. Specifically, with the source and target domains trained simultaneously, a domain encoding module is proposed to introduce domain information into the network. The original features of the source and target domains are first enriched with these domain encodings, and then sequentially processed by a Transformer encoder to model contextual information and a decoder to summarize the encoded features into the final domain-specific feature representations. Moreover, we propose a Contrastive Clustering Loss (CCL) to directly optimize the distribution of features at the cluster level: instances are pulled closer to the prototype of the same identity and pushed farther from the prototypes of different identities. This compacts the clusters in the latent space and improves the discriminative capability of the network, leading to more accurate pseudo-label assignment in TDSR. Our method outperforms state-of-the-art UDA methods on the vehicle re-ID benchmark datasets VeRi and VehicleID in both real-world-to-real-world and synthetic-to-real-world settings.
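Since the abstract only describes the architecture and the loss verbally, the following is a minimal PyTorch sketch of the two ideas, under assumptions that the paper does not confirm: a learnable per-domain encoding added to backbone feature tokens, a standard Transformer encoder-decoder with a single learnable summary query, and a prototype-based softmax formulation of the Contrastive Clustering Loss with a temperature hyperparameter. All module names, dimensions, and hyperparameters are illustrative, not the authors' implementation.

```python
# Illustrative sketch only; module names, dimensions, and the exact loss form are
# assumptions rather than the released TDSR code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DomainSpecificHead(nn.Module):
    """Enrich backbone feature tokens with a domain encoding, model context with a
    Transformer encoder, and summarize the result into one domain-specific vector."""

    def __init__(self, dim=256, n_domains=2, n_heads=8, n_layers=2):
        super().__init__()
        self.domain_encoding = nn.Embedding(n_domains, dim)  # 0 = source, 1 = target
        enc = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        dec = nn.TransformerDecoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=n_layers)
        self.decoder = nn.TransformerDecoder(dec, num_layers=n_layers)
        # A single learnable query used by the decoder to pool the encoded tokens
        # (an assumed design; the paper may summarize the features differently).
        self.summary_query = nn.Parameter(torch.randn(1, 1, dim))

    def forward(self, tokens, domain_id):
        # tokens: (B, N, dim) spatial feature tokens; domain_id: (B,) long tensor.
        tokens = tokens + self.domain_encoding(domain_id).unsqueeze(1)
        memory = self.encoder(tokens)                              # contextual modelling
        query = self.summary_query.expand(tokens.size(0), -1, -1)
        return self.decoder(query, memory).squeeze(1)              # (B, dim)


def contrastive_clustering_loss(features, pseudo_labels, prototypes, temperature=0.05):
    """Cluster-level contrastive objective: pull each instance toward the prototype of
    its own pseudo-labelled identity and push it away from all other prototypes."""
    features = F.normalize(features, dim=1)       # (B, D) instance embeddings
    prototypes = F.normalize(prototypes, dim=1)   # (K, D), e.g. cluster centroids
    logits = features @ prototypes.t() / temperature
    return F.cross_entropy(logits, pseudo_labels)
```

In a typical UDA re-ID pipeline, the prototypes would be recomputed or momentum-updated from the current clustering of target-domain features after each pseudo-labeling round; how TDSR maintains them is not specified in the abstract.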