Topics: Normalization (sociology), Computer science, Artificial intelligence, Machine learning, Deep neural networks, Artificial neural networks, Anthropology, Sociology
Authors
Lei Huang, Jie Qin, Yi Zhou, Fan Zhu, Li Liu, Ling Shao
Identifier
DOI:10.1109/tpami.2023.3250241
Abstract
Normalization techniques are essential for accelerating the training and improving the generalization of deep neural networks (DNNs), and have been used successfully in various applications. This paper reviews and comments on the past, present, and future of normalization methods in the context of DNN training. We provide a unified picture of the main motivation behind different approaches from the perspective of optimization, and present a taxonomy for understanding the similarities and differences between them. Specifically, we decompose the pipeline of the most representative normalizing-activation methods into three components: normalization area partitioning, the normalization operation, and normalization representation recovery. In doing so, we provide insight for designing new normalization techniques. Finally, we discuss the current progress in understanding normalization methods, and provide a comprehensive review of the applications of normalization to particular tasks, in which it can effectively solve key issues.
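The three-component decomposition named in the abstract can be illustrated with batch normalization, one representative normalizing-activation method. The NumPy sketch below is not code from the paper; it is a minimal assumed example in which the partitioning step is the choice of axes for the statistics, the operation is standardization, and the recovery step is the learnable affine transform.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch normalization over an NCHW tensor, split into the three
    components from the survey's taxonomy (illustrative sketch)."""
    # 1. Normalization area partitioning: statistics are pooled over the
    #    batch and spatial axes, separately for each channel.
    axes = (0, 2, 3)
    mean = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)
    # 2. Normalization operation: standardize to zero mean, unit variance.
    x_hat = (x - mean) / np.sqrt(var + eps)
    # 3. Normalization representation recovery: a learnable per-channel
    #    affine transform restores representational capacity.
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=(4, 3, 8, 8))
gamma = np.ones((1, 3, 1, 1))   # learnable scale (initialized to 1)
beta = np.zeros((1, 3, 1, 1))   # learnable shift (initialized to 0)
y = batch_norm(x, gamma, beta)
```

Other methods in the taxonomy vary these components independently: layer normalization changes only the partitioning axes, while whitening-based methods change the normalization operation itself.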