Keywords: Initialization, Heuristics, Computer Science, Algorithms, Artificial Intelligence, Programming Languages
Authors
Lu Lu, Yeonjong Shin, Yanhui Su, George Em Karniadakis
Source
Journal: Communications in Computational Physics [Global Science Press]
Date: 2020-06-01
Volume/Issue: 28 (5): 1671-1706
Citations: 112
Identifier
DOI: 10.4208/cicp.oa-2020-0165
Abstract
Recent theoretical work has demonstrated that deep neural networks have superior performance over shallow networks, but their training is more difficult, e.g., they suffer from the vanishing gradient problem. This problem can typically be resolved by the rectified linear unit (ReLU) activation. However, here we show that even with this activation, deep and narrow neural networks (NNs) will, with high probability, converge to erroneous mean or median states of the target function, depending on the loss function. Deep and narrow NNs are encountered in solving partial differential equations with high-order derivatives. We demonstrate this collapse of such NNs both numerically and theoretically, and provide estimates of the probability of collapse. We also construct a diagram of a safe region for designing NNs that avoid the collapse to erroneous states. Finally, we examine different ways of initialization and normalization that may avoid the collapse problem. Asymmetric initializations may reduce the probability of collapse but do not eliminate it entirely.
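To make the collapse concrete, here is a minimal PyTorch sketch, a demonstration under stated assumptions rather than the paper's actual experiments: the |x| target, the depth/width choices, and the `make_net`/`asymmetric_init` helpers are all illustrative. It trains a deep, narrow ReLU network on a toy 1-D regression with an L2 loss; per the abstract, such a network collapses with high probability to a constant output near the target mean.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy 1-D regression target f(x) = |x|; under an L2 loss, a collapsed
# network outputs roughly the target mean (about 0.5 on this grid).
x = torch.linspace(-1.0, 1.0, 200).unsqueeze(1)
y = x.abs()

def make_net(depth=10, width=2):
    """Deep and narrow ReLU network: the regime where collapse is likely."""
    layers = [nn.Linear(1, width), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, 1))
    return nn.Sequential(*layers)

def asymmetric_init(net, bias_shift=0.1):
    """Shift biases positive so every ReLU starts in its active region.

    A simple stand-in for an asymmetric initialization; not necessarily
    the exact scheme proposed in the paper.
    """
    with torch.no_grad():
        for m in net.modules():
            if isinstance(m, nn.Linear):
                m.bias.add_(bias_shift)

def train(net, steps=2000, lr=1e-3):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(x) - y) ** 2).mean()  # L2 loss -> "mean state" collapse
        loss.backward()
        opt.step()
    return net(x).detach()

for name, use_asym in [("default init", False), ("asymmetric init", True)]:
    net = make_net()
    if use_asym:
        asymmetric_init(net)
    pred = train(net)
    # A collapsed network has (nearly) zero output spread and predicts
    # approximately the target mean.
    print(f"{name:15s}  output std {pred.std().item():.4f}  "
          f"mean pred {pred.mean().item():.4f}  "
          f"target mean {y.mean().item():.4f}")
```

Whether the default-initialization run actually collapses depends on the random seed; the abstract's claim is probabilistic, so re-running with different seeds, or widening the network, illustrates the safe-region idea. The bias-shifting `asymmetric_init` above is only one simple variant of the asymmetric initializations the paper examines.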