
Understanding the difficulty of training deep feedforward neural networks

Keywords: initialization, computer science, artificial neural network, artificial intelligence, deep neural network, deep learning, gradient descent, Jacobian matrix and determinant, sigmoid function, machine learning, mathematics, applied mathematics, programming language

Authors
Xavier Glorot, Yoshua Bengio

Abstract

Whereas before 2006 it appears that deep multilayer neural networks were not successfully trained, since then several algorithms have been shown to successfully train them, with experimental results showing the superiority of deeper versus less deep architectures. All these experimental results were obtained with new initialization or training mechanisms. Our objective here is to understand better why standard gradient descent from random initialization is doing so poorly with deep neural networks, to better understand these recent relative successes and help design better algorithms in the future. We first observe the influence of the non-linear activation functions. We find that the logistic sigmoid activation is unsuited for deep networks with random initialization because of its mean value, which can drive especially the top hidden layer into saturation. Surprisingly, we find that saturated units can move out of saturation by themselves, albeit slowly, which explains the plateaus sometimes seen when training neural networks. We find that a new non-linearity that saturates less can often be beneficial. Finally, we study how activations and gradients vary across layers and during training, with the idea that training may be more difficult when the singular values of the Jacobian associated with each layer are far from 1. Based on these considerations, we propose a new initialization scheme that brings substantially faster convergence.

1 Deep Neural Networks

Deep learning methods aim at learning feature hierarchies, with features from higher levels of the hierarchy formed by the composition of lower-level features. They include learning methods for a wide array of deep architectures (among others, Weston et al., 2008). Much attention has recently been devoted to them (see Bengio (2009) for a review), because of their theoretical appeal, inspiration from biology and human cognition, and because of empirical success in vision (Ranzato et al., 2007; Larochelle et al., 2007; Vincent et al., 2008) and natural language processing (NLP) (Collobert & Weston, 2008; Mnih & Hinton, 2009). Theoretical results, reviewed and discussed by Bengio (2009), suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks), one may need deep architectures.

Most of the recent experimental results with deep architectures are obtained with models that can be turned into deep supervised neural networks, but with initialization or training schemes different from the classical feedforward neural networks (Rumelhart et al., 1986). Why are these new algorithms working so much better than the standard random initialization and gradient-based optimization of a supervised training criterion? Part of the answer may be found in recent analyses of the effect of unsupervised pre-training (Erhan et al., 2009), showing that it acts as a regularizer that initializes the parameters in a "better" basin of attraction of the optimization procedure, corresponding to an apparent local minimum associated with better generalization. But earlier work (Bengio et al., 2007) had shown that even a purely supervised but greedy layer-wise procedure would give better results.
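The abstract's final claim, that a suitably scaled random initialization brings substantially faster convergence, corresponds to what is now commonly called Xavier (or Glorot) initialization. The following is a minimal NumPy sketch of that idea, not the paper's reference implementation; the uniform range sqrt(6 / (fan_in + fan_out)) is the form usually attributed to this paper but is not stated in the excerpt above, and the layer sizes are made up for illustration.

    import numpy as np

    def normalized_init(fan_in, fan_out, rng=None):
        # Assumed form of the normalized initialization: weights drawn from
        # U[-limit, +limit] with limit = sqrt(6 / (fan_in + fan_out)), chosen so
        # that activation variance (forward pass) and gradient variance
        # (backward pass) stay roughly constant from layer to layer.
        rng = np.random.default_rng() if rng is None else rng
        limit = np.sqrt(6.0 / (fan_in + fan_out))
        return rng.uniform(-limit, limit, size=(fan_in, fan_out))

    # Hypothetical 5-layer network (sizes are illustrative, not from the paper).
    layer_sizes = [784, 1000, 1000, 1000, 10]
    weights = [normalized_init(n_in, n_out)
               for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
    biases = [np.zeros(n_out) for n_out in layer_sizes[1:]]

The intent, following the abstract, is to keep the layer-to-layer variance of activations and gradients roughly constant, which keeps the singular values of each layer's Jacobian close to 1, the regime the authors identify as favourable for training.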
So here instead of focusing on what unsupervised pre-training or semi-supervised criteria bring to deep architectures, we focus on analyzing what may be going wrong with good old (but deep) multilayer neural networks. Our analysis is driven by investigative experiments to monitor activations (watching for saturation of hidden units) and gradients, across layers and across training iterations. We also evaluate the effects on these of choices of activation function (with the idea that it might affect saturation) and initialization procedure (since unsupervised pretraining is a particular form of initialization and it has a drastic impact).
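As a concrete illustration of the monitoring described above, the sketch below builds a small sigmoid network with a conventional small-variance random initialization and reports, per layer, the mean activation, the fraction of saturated units, and the magnitude of the back-propagated gradient. It is a minimal sketch under assumed layer sizes, weight ranges, and saturation thresholds, not the paper's experimental setup.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(x, weights, biases):
        # Keep every layer's activations so saturation can be inspected later.
        activations = [x]
        for W, b in zip(weights, biases):
            x = sigmoid(x @ W + b)
            activations.append(x)
        return activations

    def gradient_norms(activations, weights, grad_out):
        # Back-propagate an error signal and record the mean absolute gradient
        # after each layer; sigmoid'(z) = a * (1 - a) for stored outputs a.
        norms, g = [], grad_out
        for W, a in zip(reversed(weights), reversed(activations[1:])):
            g = (g * a * (1.0 - a)) @ W.T
            norms.append(np.mean(np.abs(g)))
        return norms[::-1]

    # Hypothetical 5-layer sigmoid network with standard small random weights.
    sizes = [100, 100, 100, 100, 100, 10]
    weights = [rng.uniform(-0.1, 0.1, size=(n_in, n_out))
               for n_in, n_out in zip(sizes[:-1], sizes[1:])]
    biases = [np.zeros(n) for n in sizes[1:]]

    x = rng.normal(size=(256, sizes[0]))              # a random mini-batch
    acts = forward(x, weights, biases)
    for i, a in enumerate(acts[1:], start=1):
        saturated = np.mean((a < 0.01) | (a > 0.99))  # assumed saturation threshold
        print(f"layer {i}: mean activation={a.mean():.3f}  saturated={saturated:.1%}")

    grads = gradient_norms(acts, weights, rng.normal(size=(256, sizes[-1])))
    for i, n in enumerate(grads, start=1):
        print(f"gradient magnitude back-propagated through layer {i}: {n:.3e}")

With a small-variance initialization like this one, the back-propagated gradient magnitudes typically shrink as the signal moves toward the input, which is the kind of layer-wise effect the authors' instrumentation is designed to expose.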