Optimizing Recurrent Neural Networks: A Study on Gradient Normalization of Weights for Enhanced Training Efficiency

Keywords: Normalization, Gradient descent, Hyperparameter, Computer science, Recurrent neural network, Perplexity, Artificial intelligence, Artificial neural network, Gradient method, Stochastic gradient descent, Machine learning, Algorithm, Language model
Authors
Xinyi Wu, Bingjie Xiang, Huaizheng Lu, Chaopeng Li, Xingwang Huang, Weifang Huang
Source
Journal: Applied Sciences [MDPI AG]
Volume/Issue: 14 (15): 6578 · Cited by: 2
Identifier
DOI: 10.3390/app14156578
Abstract

Recurrent Neural Networks (RNNs) are classical models for processing sequential data, demonstrating excellent performance in tasks such as natural language processing and time-series prediction. However, training RNNs often runs into vanishing and exploding gradients, which significantly degrade the model's performance and training efficiency. In this paper, we investigate why RNNs are more prone to gradient problems than other common sequential networks. To address this issue and enhance network performance, we propose a method for gradient normalization of network weights. This method suppresses gradient problems by altering the statistical properties of the RNN weights, thereby improving training effectiveness. Additionally, we analyze the impact of weight-gradient normalization on the probability distribution of the model weights and validate the method's sensitivity to hyperparameters such as the learning rate. The experimental results demonstrate that gradient normalization stabilizes model training and reduces the frequency of gradient issues. On the Penn Treebank dataset, the method achieves a perplexity of 110.89, an 11.48% improvement over conventional gradient descent. For prediction lengths of 24 and 96 on the ETTm1 dataset, it attains Mean Absolute Error (MAE) values of 0.778 and 0.592, improvements of 3.00% and 6.77%, respectively, over conventional gradient descent. Moreover, on selected subsets of the UCR archive, accuracy increases by 0.4% to 6.0%. Gradient normalization enhances the ability of RNNs to learn from sequential and causal data and thus holds significant implications for optimizing the training of RNN-based models.
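The abstract describes the method only at a high level. As an illustrative sketch rather than the paper's actual algorithm, the PyTorch snippet below rescales each weight tensor's gradient to unit L2 norm before the SGD step, which is one plausible reading of "gradient normalization of network weights"; the toy model, its dimensions, and the helper normalize_weight_gradients are all hypothetical.

```python
# Hypothetical sketch of per-tensor gradient normalization for an RNN.
# The paper's exact formulation is not reproduced here; this assumes the
# method rescales each weight tensor's gradient to unit L2 norm before
# the optimizer update.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy RNN classifier (dimensions are illustrative only).
model = nn.RNN(input_size=32, hidden_size=64, batch_first=True)
readout = nn.Linear(64, 10)
params = list(model.parameters()) + list(readout.parameters())
optimizer = torch.optim.SGD(params, lr=0.1)
criterion = nn.CrossEntropyLoss()

def normalize_weight_gradients(parameters, eps=1e-8):
    """Rescale each parameter's gradient to unit L2 norm (per tensor)."""
    for p in parameters:
        if p.grad is not None:
            norm = p.grad.norm()
            if norm > eps:
                p.grad.div_(norm)

# One illustrative training step on random data.
x = torch.randn(8, 20, 32)          # (batch, seq_len, features)
y = torch.randint(0, 10, (8,))      # class targets

optimizer.zero_grad()
out, h_n = model(x)
logits = readout(out[:, -1])        # classify from the last time step
loss = criterion(logits, y)
loss.backward()
normalize_weight_gradients(params)  # normalize before the update
optimizer.step()
print(f"loss: {loss.item():.4f}")
```

Under this reading, normalizing per tensor (unlike clipping, which only intervenes when the global norm exceeds a threshold) fixes the update magnitude of every weight matrix regardless of how small or large its raw gradient is, which would directly counteract both vanishing and exploding gradients.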
