Computer science
Artificial intelligence
Cross entropy
Leverage (statistics)
Machine learning
Hyperparameter
Margin (machine learning)
Robustness (evolution)
Embedding
Pattern recognition (psychology)
Feature learning
Natural language processing
Biochemistry
Gene
Chemistry
Authors
Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, Dilip Krishnan
Source
Journal: Neural Information Processing Systems
Date: 2020-04-23
Volume/Pages: 33: 18661-18673
Citations: 273
Abstract
Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state-of-the-art performance in the unsupervised training of deep image models. Modern batch contrastive approaches subsume or significantly outperform traditional contrastive losses such as triplet, max-margin and the N-pairs loss. In this work, we extend the self-supervised batch contrastive approach to the fully-supervised setting, allowing us to effectively leverage label information. Clusters of points belonging to the same class are pulled together in embedding space, while simultaneously pushing apart clusters of samples from different classes. We analyze two possible versions of the supervised contrastive (SupCon) loss, identifying the best-performing formulation of the loss. On ResNet-200, we achieve top-1 accuracy of 81.4% on the ImageNet dataset, which is 0.8% above the best number reported for this architecture. We show consistent outperformance over cross-entropy on other datasets and two ResNet variants. The loss shows benefits for robustness to natural corruptions and is more stable to hyperparameter settings such as optimizers and data augmentations. Our loss function is simple to implement, and reference TensorFlow code is released at https://t.ly/supcon.
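
The SupCon loss summarized above has a compact form: for each anchor, the positives are all other samples in the batch sharing its label, and the contrastive denominator runs over all other samples. Below is a minimal NumPy sketch of the variant in which the average over positives sits outside the logarithm (the formulation the paper reports as better-performing); the function name, temperature value, and toy batch are illustrative assumptions and are not taken from the released reference code.

import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    # features: (N, D) embeddings; labels: (N,) integer class ids.
    # L2-normalize so dot products are cosine similarities.
    features = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = features @ features.T / temperature
    # Exclude self-comparisons from both the positives and the denominator.
    logits_mask = 1.0 - np.eye(len(labels))
    pos_mask = (labels[:, None] == labels[None, :]).astype(float) * logits_mask
    # Row-wise log-softmax over all other samples (numerically stabilized).
    sim = sim - sim.max(axis=1, keepdims=True)
    exp_sim = np.exp(sim) * logits_mask
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    # Mean log-probability of the positives per anchor, averaged over anchors;
    # anchors with no positive in the batch contribute zero.
    pos_count = np.clip(pos_mask.sum(axis=1), 1.0, None)
    return (-(pos_mask * log_prob).sum(axis=1) / pos_count).mean()

# Toy usage: a batch of four embeddings from two classes.
rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))
y = np.array([0, 0, 1, 1])
print(supcon_loss(z, y))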