Authors
Xiao Wang, Yuhang Huang, Dan Zeng, Guo-Jun Qi
Identifier
DOI:10.1109/tpami.2023.3262608
Abstract
As a representative self-supervised method, contrastive learning has achieved great success in unsupervised representation learning. It trains an encoder by distinguishing positive samples from negative ones given query anchors. These positive and negative samples play critical roles in defining the objective that learns a discriminative encoder, preventing it from learning trivial features. While existing methods choose these samples heuristically, we present a principled method in which both positive and negative samples are directly learnable end-to-end with the encoder. We show that the positive and negative samples can be cooperatively and adversarially learned by minimizing and maximizing the contrastive loss, respectively. This yields cooperative positives and adversarial negatives with respect to the encoder, which are updated to continuously track the learned representation of the query anchors over mini-batches. The proposed method achieves 71.3% and 75.3% top-1 accuracy after 200 and 800 epochs, respectively, of pre-training a ResNet-50 backbone on ImageNet1K, without tricks such as multi-crop or stronger augmentations. With multi-crop, it can be further boosted to 75.7%. The source code and pre-trained model are released at https://github.com/maple-research-lab/caco.
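The core idea above (cooperative positives minimize the contrastive loss, adversarial negatives maximize it) can be illustrated with a small sketch. This is not the authors' implementation; it is a minimal NumPy toy that assumes a standard InfoNCE-style loss over unit-normalized embeddings, with hand-derived gradients, updating a single positive by gradient descent and the negatives by gradient ascent:

```python
import numpy as np

rng = np.random.default_rng(0)

def l2norm(x):
    # project embeddings back onto the unit sphere
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def info_nce(q, pos, negs, tau=0.1):
    """InfoNCE loss for one query, plus analytic gradients
    w.r.t. the positive and the negatives (a common sketch;
    the paper's exact objective may differ)."""
    logits = np.concatenate([[q @ pos], negs @ q]) / tau
    p = np.exp(logits - logits.max())
    p /= p.sum()                       # softmax over [pos, negs]
    loss = -np.log(p[0])
    grad_pos = (p[0] - 1.0) * q / tau  # descent pulls pos toward q
    grad_negs = p[1:, None] * q[None, :] / tau  # ascent pushes negs toward q
    return loss, grad_pos, grad_negs

d, K, lr = 8, 16, 0.5
q = l2norm(rng.normal(size=d))        # query anchor embedding
pos = l2norm(rng.normal(size=d))      # learnable positive
negs = l2norm(rng.normal(size=(K, d)))  # learnable negatives

sim_pos_before = q @ pos
sim_neg_before = (negs @ q).mean()

loss_before, gp, gn = info_nce(q, pos, negs)
pos = l2norm(pos - lr * gp)   # cooperative: gradient DESCENT on the loss
negs = l2norm(negs + lr * gn) # adversarial: gradient ASCENT on the same loss

sim_pos_after = q @ pos
sim_neg_after = (negs @ q).mean()
```

After the update, the positive is more similar to the query (lowering the loss), while the negatives have also moved toward the query, making them harder and raising the loss the encoder must overcome; in the full method these updates happen jointly with encoder training over mini-batches.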