Computer science
Meta-learning (computer science)
Initialization
Artificial intelligence
Machine learning
Task (project management)
Reinforcement learning
Adaptation (eye)
Generalization
Multi-task learning
Mathematical analysis
Programming language
Management
Economics
Physics
Optics
Mathematics
Authors
Sungyong Baik, Junghoon Oh, Seokil Hong, Kyoung Mu Lee
Identifiers
DOI: 10.1109/TPAMI.2021.3102098
Abstract
Few-shot learning is an emerging yet challenging problem in which the goal is to achieve generalization from only a few examples. Meta-learning tackles few-shot learning by learning prior knowledge shared across tasks and using it to learn new tasks. One of the most representative meta-learning algorithms is model-agnostic meta-learning (MAML), which formulates prior knowledge as a common initialization, a shared starting point from which a learner can quickly adapt to unseen tasks. However, forcibly sharing an initialization can lead to conflicts among tasks and to a compromised location on the optimization landscape (one undesired by individual tasks), thereby hindering task adaptation. Furthermore, the degree of conflict is observed to vary not only among tasks but also among the layers of a neural network. Thus, we propose task-and-layer-wise attenuation of the compromised initialization to reduce its adverse influence on task adaptation. Because the attenuation dynamically controls (or selectively forgets) the influence of the compromised prior knowledge for each task and layer, we name our method Learn to Forget (L2F). Experimental results demonstrate that the proposed method greatly improves the performance of state-of-the-art MAML-based frameworks across diverse domains: few-shot classification, cross-domain few-shot classification, regression, reinforcement learning, and visual tracking.
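To make the mechanism described in the abstract concrete, below is a minimal NumPy sketch of the task-and-layer-wise attenuation idea: before a MAML-style inner loop, each layer of the shared initialization is scaled by a per-task factor in [0, 1] derived from that layer's initial task gradient. This is a hedged illustration, not the paper's implementation: the gradient-norm-based attenuation generator, the two-layer linear model, and all names (attenuated_inner_loop, gen_params, etc.) are assumptions for the sketch; the actual L2F method meta-learns a small network to produce the attenuation factors.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def layer_grads(weights, x, y):
    """Per-layer gradients of the MSE loss for a 2-layer linear net y_hat = W2 @ (W1 @ x)."""
    W1, W2 = weights
    h = W1 @ x
    err = (W2 @ h) - y                    # (out_dim, batch)
    gW2 = err @ h.T / x.shape[1]
    gW1 = (W2.T @ err) @ x.T / x.shape[1]
    return [gW1, gW2]

def attenuated_inner_loop(theta, gen_params, x, y, inner_lr=0.1, steps=5):
    """MAML-style adaptation that starts from an attenuated copy of the initialization."""
    grads0 = layer_grads(theta, x, y)
    # Task-and-layer-wise attenuation: one gamma per layer, recomputed per task.
    # Here gamma_l = sigmoid(a_l * ||grad_l|| + b_l) is an illustrative stand-in
    # for the learned generator network used in L2F.
    gammas = [sigmoid(a * np.linalg.norm(g) + b)
              for (a, b), g in zip(gen_params, grads0)]
    phi = [g_l * W for g_l, W in zip(gammas, theta)]   # selectively "forget" the prior
    for _ in range(steps):                              # standard inner-loop adaptation
        grads = layer_grads(phi, x, y)
        phi = [W - inner_lr * g for W, g in zip(phi, grads)]
    return phi, gammas

# Toy usage: one regression task with 3-d inputs and 2-d targets.
rng = np.random.default_rng(0)
theta = [rng.standard_normal((4, 3)) * 0.1, rng.standard_normal((2, 4)) * 0.1]
gen_params = [(0.5, 0.0), (0.5, 0.0)]   # per-layer (a, b); meta-learned in full L2F
x, y = rng.standard_normal((3, 16)), rng.standard_normal((2, 16))
phi, gammas = attenuated_inner_loop(theta, gen_params, x, y)
print("per-layer attenuation:", [round(float(g), 3) for g in gammas])
```

In this sketch the outer (meta) loop is omitted; in full MAML-style training, both the initialization theta and the attenuation generator's parameters would be updated from the post-adaptation loss across tasks.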