Keywords
Computer science, Artificial intelligence, Machine learning, Deep learning, Convolutional neural network, Feature learning, Feature engineering, Domain adaptation, Pattern recognition, Discriminative model
Authors
Kaiyang Zhou, Yongxin Yang, Andrea Cavallaro, Tao Xiang
Identifier
DOI: 10.1109/TPAMI.2021.3069237
Abstract
An effective person re-identification (re-ID) model should learn feature representations that are both discriminative, for distinguishing similar-looking people, and generalisable, for deployment across datasets without any adaptation. In this paper, we develop novel CNN architectures to address both challenges. First, we present a re-ID CNN termed omni-scale network (OSNet) to learn features that not only capture different spatial scales but also encapsulate a synergistic combination of multiple scales, namely omni-scale features. The basic building block consists of multiple convolutional streams, each detecting features at a certain scale. For omni-scale feature learning, a unified aggregation gate is introduced to dynamically fuse multi-scale features with channel-wise weights. OSNet is lightweight as its building blocks comprise factorised convolutions. Second, to improve generalisable feature learning, we introduce instance normalisation (IN) layers into OSNet to cope with cross-dataset discrepancies. Further, to determine the optimal placements of these IN layers in the architecture, we formulate an efficient differentiable architecture search algorithm. Extensive experiments show that, in the conventional same-dataset setting, OSNet achieves state-of-the-art performance, despite being much smaller than existing re-ID models. In the more challenging yet practical cross-dataset setting, OSNet beats most recent unsupervised domain adaptation methods without using any target data. Our code and models are released at https://github.com/KaiyangZhou/deep-person-reid .
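The following is a minimal sketch, not the authors' released implementation, of the building block described in the abstract: several factorised-convolution streams with growing receptive fields, fused by a unified aggregation gate that produces channel-wise weights shared across all streams. The class names (LiteConv, AggregationGate, OmniScaleBlock) and the choices of stream count, channel width and gate reduction ratio are illustrative assumptions; the official code is at the repository linked above.

```python
# Hedged sketch of an omni-scale block with a unified aggregation gate (PyTorch).
import torch
import torch.nn as nn


class LiteConv(nn.Module):
    """Factorised convolution: 1x1 pointwise followed by 3x3 depthwise (assumed form)."""
    def __init__(self, channels):
        super().__init__()
        self.pw = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.dw = nn.Conv2d(channels, channels, kernel_size=3, padding=1,
                            groups=channels, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.dw(self.pw(x))))


class AggregationGate(nn.Module):
    """Channel-wise gate: global average pooling, small bottleneck MLP, sigmoid weights."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Re-weight each channel of a stream's output dynamically, per input.
        return x * self.fc(x)


class OmniScaleBlock(nn.Module):
    """Stream t stacks (t+1) LiteConv units, so its receptive field grows with t;
    a single gate, shared by all streams, fuses them by weighted summation."""
    def __init__(self, channels, num_streams=4):
        super().__init__()
        self.streams = nn.ModuleList(
            nn.Sequential(*[LiteConv(channels) for _ in range(t + 1)])
            for t in range(num_streams)
        )
        self.gate = AggregationGate(channels)  # unified: shared across streams

    def forward(self, x):
        fused = sum(self.gate(stream(x)) for stream in self.streams)
        return torch.relu(fused + x)  # residual connection


if __name__ == "__main__":
    block = OmniScaleBlock(channels=64)
    out = block(torch.randn(2, 64, 96, 32))  # e.g. a person-image feature map
    print(out.shape)  # torch.Size([2, 64, 96, 32])
```

Because the gate weights are computed from the input itself and shared across streams, the fusion is input-dependent rather than a fixed combination of scales, which is the "omni-scale" behaviour the abstract describes; the depthwise-separable streams keep the block lightweight.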