Authors
Sylvestre-Alvise Rebuffi, Sébastien Ehrhardt, Kai Han, Andrea Vedaldi, Andrew Zisserman
Identifier
DOI: 10.1109/cvprw50498.2020.00389
Abstract
While semi-supervised learning (SSL) algorithms provide an efficient way to make use of both labelled and unlabelled data, they generally struggle when the number of annotated samples is very small. In this work, we consider the problem of SSL multi-class classification with very few labelled instances. We introduce two key ideas. The first is simple but effective: we leverage the power of transfer learning among different tasks and self-supervision to initialize a good representation of the data without making use of any labels. The second is a new algorithm for SSL that can exploit such a pre-trained representation well. The algorithm works by alternating two phases, one fitting the labelled points and one fitting the unlabelled ones, with carefully controlled information flow between them. The benefits are a large reduction in overfitting of the labelled data and the avoidance of issues with balancing labelled and unlabelled losses during training. We show empirically that this method can successfully train competitive models with as few as 10 labelled data points per class. More generally, we show that the idea of bootstrapping features using self-supervised learning consistently improves SSL on standard benchmarks. We also show that our algorithm works increasingly well compared to other methods when refining from other tasks or datasets.
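The abstract does not spell out the algorithm, but the alternating two-phase scheme it describes can be illustrated with a minimal PyTorch-style sketch. Everything below is an assumption made for illustration: using pseudo-labels as the controlled information channel between phases, a single shared optimizer, and the loader interfaces are hypothetical choices, not the authors' actual method.

import copy
import torch
import torch.nn.functional as F

def train_alternating(model, labelled_loader, unlabelled_loader,
                      optimizer, epochs=10, device="cpu"):
    # Hypothetical sketch of the alternating two-phase loop described in the
    # abstract; the paper's actual algorithm and hyper-parameters may differ.
    model.to(device)
    for epoch in range(epochs):
        # Phase 1: fit the few labelled points with a standard supervised loss.
        model.train()
        for x, y in labelled_loader:
            x, y = x.to(device), y.to(device)
            loss = F.cross_entropy(model(x), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        # Snapshot the labelled-phase model: information flows into the
        # unlabelled phase only through the fixed pseudo-labels it produces.
        teacher = copy.deepcopy(model).eval()

        # Phase 2: fit the unlabelled points against those fixed targets.
        for (x,) in unlabelled_loader:
            x = x.to(device)
            with torch.no_grad():
                pseudo = teacher(x).argmax(dim=1)
            loss = F.cross_entropy(model(x), pseudo)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

Because the two losses are optimized in separate phases rather than summed into one objective, no weighting hyper-parameter between labelled and unlabelled terms is needed, which is consistent with the balancing benefit the abstract claims.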