Hyperparameter
Computer science
Machine learning
Artificial intelligence
Supervised learning
Representation (politics)
Coding (set theory)
Labeled data
Quality (philosophy)
State (computer science)
Artificial neural network
Epistemology
Philosophy
Politics
Set (abstract data type)
Programming language
Law
Political science
Algorithm
Authors
Enrico Fini, Victor G. Turrisi da Costa, Xavier Alameda-Pineda, Elisa Ricci, Karteek Alahari, Julien Mairal
Identifier
DOI: 10.1109/cvpr52688.2022.00940
Abstract
Self-supervised models have been shown to produce comparable or better visual representations than their supervised counterparts when trained offline on unlabeled data at scale. However, their efficacy is catastrophically reduced in a Continual Learning (CL) scenario where data is presented to the model sequentially. In this paper, we show that self-supervised loss functions can be seamlessly converted into distillation mechanisms for CL by adding a predictor network that maps the current state of the representations to their past state. This enables us to devise a framework for Continual self-supervised visual representation Learning that (i) significantly improves the quality of the learned representations, (ii) is compatible with several state-of-the-art self-supervised objectives, and (iii) needs little to no hyperparameter tuning. We demonstrate the effectiveness of our approach empirically by training six popular self-supervised models in various CL settings. Code: github.com/DonkeyShot21/cassle.
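The abstract compresses the method into one sentence: reuse the self-supervised loss itself as a distillation term, with a small predictor network mapping the current representations onto those of a frozen copy of the past encoder. Below is a minimal PyTorch sketch of that idea, assuming a SimSiam-style negative-cosine objective; the class name `CassleDistillation`, the two-layer MLP predictor, and all dimensions are illustrative assumptions rather than details taken from the paper (see the official implementation at github.com/DonkeyShot21/cassle).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def negative_cosine(p, z):
    # SimSiam-style similarity with stop-gradient on the target branch.
    return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

class CassleDistillation(nn.Module):
    """Sketch of the core idea: a predictor g maps the current state of
    the representations to their past state, and the SSL loss itself is
    reused as the distillation loss (sizes here are assumptions)."""
    def __init__(self, dim=2048, hidden=512):
        super().__init__()
        # Hypothetical 2-layer MLP predictor over the projected features.
        self.predictor = nn.Sequential(
            nn.Linear(dim, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, z_current, z_past):
        # Predict the past representations from the current ones, then
        # score the match with the same objective used for SSL training.
        return negative_cosine(self.predictor(z_current), z_past)

# Hypothetical usage: during task t, keep a frozen snapshot of the encoder
# from task t-1 and add the distillation term to the current SSL loss.
#   z_cur  = current_encoder(x)        # trainable
#   z_past = frozen_past_encoder(x)    # frozen snapshot, no gradients
#   loss   = ssl_loss + cassle(z_cur, z_past)
```

Because the distillation term is just the existing self-supervised objective applied across time steps, it inherits that objective's hyperparameters, which is consistent with the abstract's claim that little to no extra tuning is needed.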