Normalization (sociology)
Computer science
Visualization
Artificial neural network
Generalization
Curvature
Information loss
Network architecture
Artificial intelligence
Mathematics
Geometry
Anthropology
Computer security
Mathematical analysis
Sociology
Authors
Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, Tom Goldstein
Source
Venue: Neural Information Processing Systems
Date: 2018-02-15
Volume/Pages: 31: 6391-6401
Citations: 544
Identifier
DOI: 10.3929/ethz-b-000461393
Abstract
Neural network training relies on our ability to find good minimizers of highly non-convex loss functions. It is well known that certain network architecture designs (e.g., skip connections) produce loss functions that train easier, and well-chosen training parameters (batch size, learning rate, optimizer) produce minimizers that generalize better. However, the reasons for these differences, and their effect on the underlying loss landscape, are not well understood. In this paper, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. First, we introduce a simple filter normalization method that helps us visualize loss function curvature, and make meaningful side-by-side comparisons between loss functions. Then, using a variety of visualizations, we explore how network architecture affects the loss landscape, and how training parameters affect the shape of minimizers.
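The filter normalization idea mentioned in the abstract can be sketched as follows: a random direction in weight space is rescaled filter by filter so that each of its filters has the same norm as the corresponding filter of the trained weights, removing scale artifacts before plotting the loss along that direction. This is a minimal NumPy sketch under that reading, not the authors' reference implementation; the function name and the convention that the first axis indexes filters are assumptions.

```python
import numpy as np

def filter_normalize(direction, weights):
    """Rescale each filter of a random direction so that its Frobenius
    norm matches the norm of the corresponding filter in `weights`.

    Both arguments are lists of arrays (one array per layer); the first
    axis of each array is assumed to index filters (output channels).
    This is an illustrative sketch of filter-wise normalization.
    """
    normalized = []
    for d_layer, w_layer in zip(direction, weights):
        d_layer = d_layer.copy()
        for i in range(d_layer.shape[0]):
            d_norm = np.linalg.norm(d_layer[i])
            w_norm = np.linalg.norm(w_layer[i])
            # Scale the direction's filter to match the weight filter's norm;
            # the small epsilon guards against a zero-norm random filter.
            d_layer[i] *= w_norm / (d_norm + 1e-10)
        normalized.append(d_layer)
    return normalized
```

With such a normalized direction `d`, a 1-D loss curve would be traced by evaluating the loss at `weights + alpha * d` over a range of `alpha` values, so that curvature comparisons between networks are not distorted by differing filter scales.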