Over the past decade, deep neural networks (DNNs) have been widely adopted due to their remarkable accuracy in real-world applications. However, this gain in accuracy often comes at the cost of computationally expensive models with high memory usage, leading to longer prediction latencies and high deployment costs. In this paper, we propose a new adaptive layer normalization (ALN) algorithm for transformer models, which tackles the computational and memory overheads incurred by traditional layer normalization (LN). The proposed method computes and stores statistical moments during training and uses them directly during inference, allowing the normalization layer to be merged with the nearest linear layer. The result is a significant acceleration in inference, by up to 29%. On image classification, our evaluations on the ImageNet dataset show an accuracy improvement of 0.1%, while accuracy on object detection remains comparable on the COCO benchmark. The proposed ALN algorithm is a simple and effective way to improve the inference time of pre-trained transformer models, making it a valuable tool for natural language processing and computer vision tasks.
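To illustrate the merging step the abstract describes, the following is a minimal NumPy sketch (not the paper's implementation). It assumes the stored moments are a per-feature mean `mu` and variance `var`: once these statistics are frozen, LN becomes an affine map `a * x + c`, which can be folded into the weights and bias of the following linear layer so that no normalization is computed at inference.

```python
import numpy as np

def fold_ln_into_linear(gamma, beta, mu, var, W, b, eps=1e-5):
    """Fold a LayerNorm with frozen statistics into the next linear layer.

    With fixed mu/var, LN(x) = gamma * (x - mu) / sqrt(var + eps) + beta
    is the affine map a * x + c, so
        W @ LN(x) + b = (W * a) @ x + (W @ c + b).
    gamma, beta, mu, var: per-feature vectors of shape (d,)  [assumed shapes]
    W: (m, d) weight of the following linear layer, b: (m,) bias.
    """
    a = gamma / np.sqrt(var + eps)      # per-feature scale of the frozen LN
    c = beta - mu * a                   # per-feature shift of the frozen LN
    W_folded = W * a                    # scale each input column of W by a
    b_folded = W @ c + b                # absorb the shift into the bias
    return W_folded, b_folded
```

The folded layer computes `W_folded @ x + b_folded` in a single matrix multiply, which is where the inference speedup comes from: the per-token mean/variance reduction of standard LN is eliminated entirely.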