Topics: Overfitting; Dropout (neural networks); Computer science; Benchmark (surveying); Context (archaeology); Set (abstract data type); Artificial neural network; Pattern recognition (psychology); Artificial intelligence; Variety (cybernetics); Feedforward neural network; Feed forward; Detector; Adaptation (eye); Feature (linguistics); Machine learning; Deep learning; Engineering; Psychology; Paleontology; Philosophy; Neuroscience; Control engineering; Biology; Programming language; Geography; Telecommunications; Linguistics; Geodesy
Authors
Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov
Source
Journal: Cornell University - arXiv
Date: 2012-07-03
Citations: 1979
Abstract
When a large feedforward neural network is trained on a small training set,
it typically performs poorly on held-out test data. This overfitting is
greatly reduced by randomly omitting half of the feature detectors on each
training case. This prevents complex co-adaptations in which a feature detector
is only helpful in the context of several other specific feature detectors.
Instead, each neuron learns to detect a feature that is generally helpful for
producing the correct answer given the combinatorially large variety of
internal contexts in which it must operate. Random dropout gives big
improvements on many benchmark tasks and sets new records for speech and object
recognition.
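The core idea in the abstract, randomly omitting half of the hidden units on each training case and compensating at test time, can be sketched as a simple masking step. This is an illustrative NumPy sketch, not the authors' implementation; the function name `dropout` and the toy activation matrix `h` are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p=0.5, train=True):
    """Zero each unit independently with probability p during training.

    At test time (train=False), all units are kept but scaled by (1 - p),
    which for p = 0.5 matches the paper's scheme of halving the weights
    of the trained network. (Illustrative sketch, not the original code.)
    """
    if not train:
        return activations * (1.0 - p)
    # Keep each unit with probability 1 - p; dropped units output zero.
    mask = rng.random(activations.shape) >= p
    return activations * mask

h = np.ones((4, 6))                # toy hidden-layer activations (assumed)
h_train = dropout(h)               # roughly half the units are zeroed
h_test = dropout(h, train=False)   # all units kept, scaled by 0.5
```

Because each training case sees a different random mask, a unit cannot rely on the presence of any particular other unit, which is how this scheme breaks the co-adaptations described above.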