Authors
Fekhr Eddine Keddous, Arcadi Llanza, Nadiya Shvai, Amir Nakib
Identifier
DOI: 10.1016/j.neucom.2024.128524
Abstract
Over the past decade, deep neural networks (DNNs) have been widely adopted thanks to their remarkable accuracy in real-world applications. This accuracy, however, often comes at the price of computationally expensive models and high memory usage, leading to longer prediction latencies and exorbitant deployment costs. In this paper, we propose a new adaptive layer normalization (ALN) algorithm for transformer models that tackles the computational and memory problems of traditional layer normalization (LN). The proposed method computes and stores statistical moments during training and uses them directly at inference, allowing the normalization layer to be merged with the nearest linear layer. The result is a significant acceleration of inference, by up to 29%. In classification, our evaluations on the ImageNet dataset show an accuracy improvement of 0.1%, while accuracy on object detection tasks on the COCO benchmark remains comparable. The proposed ALN algorithm is a simple and effective way to improve the inference time of pre-trained transformer models, making it a valuable tool for natural language processing and computer vision tasks.
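The merging step the abstract describes can be sketched concretely: once the normalization statistics are fixed (rather than recomputed per token at inference), LayerNorm reduces to a per-feature affine map, which folds algebraically into the linear layer that follows it. The PyTorch sketch below is an illustration of that folding only, not the authors' implementation; the function name `fold_ln_into_linear` is hypothetical, and the per-feature moments `mean` and `var` are assumed to have been accumulated during training, which is the part the paper calls ALN.

```python
import torch
import torch.nn as nn

def fold_ln_into_linear(ln: nn.LayerNorm, linear: nn.Linear,
                        mean: torch.Tensor, var: torch.Tensor) -> nn.Linear:
    """Fold a LayerNorm with fixed statistics into the following Linear layer.

    With frozen moments, LN(x) = x * scale + shift, where
        scale = gamma / sqrt(var + eps),  shift = beta - mean * scale,
    so  W @ LN(x) + b = (W * scale) @ x + (W @ shift + b),
    i.e. the pair collapses into a single matmul.
    """
    scale = ln.weight / torch.sqrt(var + ln.eps)
    shift = ln.bias - mean * scale
    fused = nn.Linear(linear.in_features, linear.out_features, bias=True)
    with torch.no_grad():
        # Scale each input column of W by the per-feature factor.
        fused.weight.copy_(linear.weight * scale)
        bias = linear.weight @ shift
        if linear.bias is not None:
            bias = bias + linear.bias
        fused.bias.copy_(bias)
    return fused

# Sanity check against applying the frozen-stats LN explicitly
# (mean/var here are placeholder moments, not trained values).
ln, lin = nn.LayerNorm(64), nn.Linear(64, 128)
mean, var = torch.zeros(64), torch.ones(64)
x = torch.randn(4, 64)
fused = fold_ln_into_linear(ln, lin, mean, var)
ref = lin((x - mean) / torch.sqrt(var + ln.eps) * ln.weight + ln.bias)
assert torch.allclose(fused(x), ref, atol=1e-5)
```

The speed-up claimed in the abstract comes from exactly this collapse: the per-token reduction (mean/variance over the feature dimension) disappears from the inference graph, leaving one fused linear operation.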