Concepts
Initialization, Computer science, Generality, Task (project management), Transferability, Artificial intelligence, Generalization, Layer (electronics), Artificial neural network, Convolutional neural network, Pattern recognition (psychology), Deep learning, Machine learning, Mathematics, Psychology, Mathematical analysis, Chemistry, Management, Organic chemistry, 罗伊特, Economics, Psychotherapist, Programming language
Authors
Jason Yosinski, Jeff Clune, Yoshua Bengio, Hod Lipson
Source
Journal: Cornell University - arXiv
Date: 2014-01-01
Citations: 3401
Identifier
DOI: 10.48550/arxiv.1411.1792
Abstract
Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.
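The abstract describes experiments that copy ("transfer") the first n layers of a network trained on a base task into a network for a target task, and then either freeze the transferred layers or fine-tune the whole network. The sketch below is a minimal illustration of that setup, assuming PyTorch and torchvision's ImageNet-pretrained AlexNet as a stand-in for the paper's base network; the function name transfer_first_n_layers and its parameters are hypothetical, and this is not the authors' original implementation.

```python
import torch.nn as nn
from torchvision import models

def transfer_first_n_layers(n: int, freeze: bool, num_target_classes: int) -> nn.Module:
    # Network trained on the base task (here: torchvision's ImageNet AlexNet weights).
    base = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    # Fresh, randomly initialized network for the target task.
    target = models.alexnet(weights=None, num_classes=num_target_classes)

    base_convs = [m for m in base.features if isinstance(m, nn.Conv2d)]
    target_convs = [m for m in target.features if isinstance(m, nn.Conv2d)]

    # Copy the first n convolutional layers; higher layers keep their random init.
    for src, dst in zip(base_convs[:n], target_convs[:n]):
        dst.load_state_dict(src.state_dict())
        if freeze:
            # "Frozen" variant: transferred layers are not updated when training
            # on the target task; otherwise all layers are fine-tuned together.
            for p in dst.parameters():
                p.requires_grad = False

    return target

# Example: transfer the first 3 conv layers, freeze them, 100 target classes.
model = transfer_first_n_layers(n=3, freeze=True, num_target_classes=100)
```

Varying n across the depth of the network and comparing the frozen and fine-tuned variants is what lets the kind of study described above separate the two effects the abstract mentions: specialization of higher layers to the base task, and optimization difficulty from splitting co-adapted layers.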