Computer science
Meta-learning (computer science)
Artificial intelligence
Task (project management)
Machine learning
Object detection
Segmentation
Artificial neural network
Adaptation (eye)
Architecture
Optics
Physics
Art
Visual arts
Economics
Management
Authors
Thomas Elsken, Benedikt Staffler, Jan Hendrik Metzen, Frank Hutter
Identifier
DOI: 10.1109/cvpr42600.2020.01238
Abstract
The recent progress in neural architecture search (NAS) has allowed scaling the automated design of neural architectures to real-world domains such as object detection and semantic segmentation. However, one prerequisite for the application of NAS is a large amount of labeled data and compute resources. This renders its application challenging in few-shot learning scenarios, where many related tasks need to be learned, each with limited amounts of data and compute time. Thus, few-shot learning is typically done with a fixed neural architecture. To improve upon this, we propose MetaNAS, the first method that fully integrates NAS with gradient-based meta-learning. MetaNAS optimizes a meta-architecture along with the meta-weights during meta-training. During meta-testing, architectures can be adapted to a novel task with a few steps of the task optimizer; that is, task adaptation becomes computationally cheap and requires only a small amount of data per task. Moreover, MetaNAS is agnostic in that it can be used with arbitrary model-agnostic meta-learning algorithms and arbitrary gradient-based NAS methods. Empirical results on standard few-shot classification benchmarks show that MetaNAS with a combination of DARTS and REPTILE yields state-of-the-art results.
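To make the mechanism described in the abstract concrete, below is a minimal, hypothetical Python/PyTorch sketch (not the authors' released code) of the underlying idea: REPTILE-style meta-updates applied jointly to ordinary network weights and DARTS-style architecture mixing parameters ("alphas"), so that both can later be adapted to a novel task with only a few optimizer steps. The names MixedOp, TinyNet, and reptile_meta_step, as well as the toy candidate-operation set and hyperparameters, are illustrative assumptions rather than part of MetaNAS.

```python
# Hypothetical sketch: REPTILE meta-learning over weights AND DARTS-style
# architecture parameters, illustrating the joint meta-optimization described
# in the abstract. Not the authors' implementation.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """DARTS-style mixed operation: a softmax-weighted sum of candidate ops."""
    def __init__(self, dim):
        super().__init__()
        self.ops = nn.ModuleList([nn.Linear(dim, dim), nn.Identity()])  # toy candidate set
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))           # architecture params

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

class TinyNet(nn.Module):
    """A tiny network with one searchable cell and a linear classification head."""
    def __init__(self, dim=16, n_classes=5):
        super().__init__()
        self.cell = MixedOp(dim)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):
        return self.head(torch.relu(self.cell(x)))

def reptile_meta_step(meta_model, tasks, inner_steps=5, inner_lr=1e-2, meta_lr=0.1):
    """One REPTILE meta-update: adapt a copy of the model per task, then move the
    meta-parameters (weights *and* alphas) toward the average adapted parameters."""
    meta_params = {k: v.detach().clone() for k, v in meta_model.state_dict().items()}
    deltas = {k: torch.zeros_like(v) for k, v in meta_params.items()}
    for x, y in tasks:                        # each task: a small labeled batch (few-shot)
        task_model = copy.deepcopy(meta_model)
        opt = torch.optim.SGD(task_model.parameters(), lr=inner_lr)
        for _ in range(inner_steps):          # cheap task adaptation: a few optimizer steps
            opt.zero_grad()
            F.cross_entropy(task_model(x), y).backward()
            opt.step()
        for k, v in task_model.state_dict().items():
            deltas[k] += (v.detach() - meta_params[k]) / len(tasks)
    meta_model.load_state_dict({k: meta_params[k] + meta_lr * deltas[k] for k in meta_params})

# Toy usage: one meta-training step over four random 5-way tasks with 16-dim features.
model = TinyNet()
tasks = [(torch.randn(10, 16), torch.randint(0, 5, (10,))) for _ in range(4)]
reptile_meta_step(model, tasks)
```

Because the architecture parameters are updated by the same task optimizer as the weights, a few adaptation steps at meta-test time can shift both, which is the sense in which task adaptation of the architecture is computationally cheap in this sketch.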