Keywords: Normalization (sociology), Computer Science, Artificial Intelligence, Pattern Recognition (psychology), Scale (ratio), Computer Vision, Geography, Cartography, Social Science, Sociology
Authors
Andrew Brock, Soham De, Samuel Smith, Karen Simonyan
Source
Journal: Cornell University - arXiv
Date: 2021-01-01
Citations: 220
Identifier
DOI: 10.48550/arxiv.2102.06171
Abstract
Batch normalization is a key component of most image classification models, but it has many undesirable properties stemming from its dependence on the batch size and interactions between examples. Although recent work has succeeded in training deep ResNets without normalization layers, these models do not match the test accuracies of the best batch-normalized networks, and are often unstable for large learning rates or strong data augmentations. In this work, we develop an adaptive gradient clipping technique which overcomes these instabilities, and design a significantly improved class of Normalizer-Free ResNets. Our smaller models match the test accuracy of an EfficientNet-B7 on ImageNet while being up to 8.7x faster to train, and our largest models attain a new state-of-the-art top-1 accuracy of 86.5%. In addition, Normalizer-Free models attain significantly better performance than their batch-normalized counterparts when finetuning on ImageNet after large-scale pre-training on a dataset of 300 million labeled images, with our best models obtaining an accuracy of 89.2%. Our code is available at https://github.com/deepmind/deepmind-research/tree/master/nfnets
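The adaptive gradient clipping (AGC) mentioned in the abstract rescales a gradient whenever its norm is large relative to the norm of the parameter it updates, which is what stabilizes training at large batch sizes without batch normalization. Below is a minimal NumPy sketch of this idea; the function name, the unit-wise (per-row) norm convention, and the default thresholds are assumptions for illustration — the authors' actual implementation is in the linked repository.

```python
import numpy as np

def adaptive_grad_clip(param, grad, clip=0.01, eps=1e-3):
    """Sketch of unit-wise adaptive gradient clipping (AGC).

    Each output unit's gradient g is rescaled so that
    ||g|| / max(||w||, eps) <= clip, where w is the matching
    slice of the parameter. Small gradients pass through unchanged.
    """
    # Treat the leading axis as the "unit" axis; flatten the rest.
    w = param.reshape(param.shape[0], -1)
    g = grad.reshape(grad.shape[0], -1)
    # eps keeps freshly initialized (near-zero) weights trainable.
    w_norm = np.maximum(np.linalg.norm(w, axis=1, keepdims=True), eps)
    g_norm = np.maximum(np.linalg.norm(g, axis=1, keepdims=True), 1e-6)
    # Shrink only the rows whose gradient/weight ratio exceeds `clip`.
    scale = np.minimum(1.0, clip * w_norm / g_norm)
    return (g * scale).reshape(grad.shape)
```

Unlike global-norm clipping, the threshold here adapts per unit to the parameter scale, so layers with large weights tolerate proportionally larger gradients.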