Computer science
Artificial intelligence
Machine learning
Extractor
Feature (linguistics)
Pipeline (software)
Class (philosophy)
Feature extraction
Contrast (vision)
Supervised learning
Pattern recognition (psychology)
Artificial neural network
Engineering
Philosophy
Programming language
Linguistics
Process engineering
Authors
Taemin Lee, Sungjoo Yoo
Source
Journal: IEEE Access
[Institute of Electrical and Electronics Engineers]
Date: 2021-01-01
Volume/pages: 9: 61466-61474
Cited by: 17
Identifier
DOI: 10.1109/access.2021.3074525
Abstract
Few-shot learning deals with small amounts of data, which leads to insufficient performance under the conventional cross-entropy loss. We propose a pretraining approach for few-shot learning scenarios: since the quality of the feature extractor is a critical factor in few-shot learning, we strengthen the feature extractor with a contrastive learning technique. We report that applying supervised contrastive learning to base-class training in a transductive few-shot training pipeline improves results, outperforming state-of-the-art methods on Mini-ImageNet and CUB. Furthermore, our experiments show that a much larger dataset is needed to retain few-shot classification accuracy when domain-shift degradation exists, and that applying our method eliminates the need for such a large dataset. The accuracy gain can be translated into a 3.87× runtime reduction in a resource-constrained environment.
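The abstract's key ingredient is supervised contrastive learning applied to base-class pretraining. The paper's exact training setup is not reproduced here, but the supervised contrastive (SupCon) objective it builds on can be sketched as follows: each anchor's embedding is pulled toward embeddings sharing its label and pushed away from all others. This is a minimal NumPy sketch under the assumption of L2-normalized features; function and variable names are illustrative, not from the paper.

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Supervised contrastive (SupCon) loss over one batch.

    features: (N, D) array of L2-normalized embeddings.
    labels:   (N,) integer class labels.
    For each anchor, positives are all other samples with the same
    label; the denominator ranges over all other samples in the batch.
    """
    n = features.shape[0]
    sim = features @ features.T / temperature        # pairwise similarities
    not_self = ~np.eye(n, dtype=bool)                # exclude self-pairs
    sim = sim - sim.max(axis=1, keepdims=True)       # numerical stability
    exp_sim = np.exp(sim) * not_self
    # log p(positive | anchor) for every ordered pair
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    pos_mask = (labels[:, None] == labels[None, :]) & not_self
    pos_counts = pos_mask.sum(axis=1)
    # average log-probability over each anchor's positives
    mean_log_prob_pos = (pos_mask * log_prob).sum(axis=1) / np.maximum(pos_counts, 1)
    # average over anchors that actually have positives in the batch
    return -(mean_log_prob_pos[pos_counts > 0]).mean()
```

In the pretraining scenario the abstract describes, a loss of this form replaces (or supplements) plain cross-entropy during base-class training, so that the resulting feature extractor transfers better to novel few-shot classes.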