Optimizing Recurrent Neural Networks: A Study on Gradient Normalization of Weights for Enhanced Training Efficiency

Keywords: normalization, gradient descent, hyperparameters, recurrent neural networks, perplexity, artificial neural networks, stochastic gradient descent, machine learning, language models
Authors
Xinyi Wu, Bingjie Xiang, Huaizheng Lu, Chaopeng Li, Xingwang Huang, Weifang Huang
Source
Journal: Applied Sciences [Multidisciplinary Digital Publishing Institute]
Volume/Issue: 14 (15): 6578. Cited by: 2
Identifier
DOI: 10.3390/app14156578
Abstract

Recurrent Neural Networks (RNNs) are classical models for processing sequential data, demonstrating excellent performance in tasks such as natural language processing and time series prediction. However, during the training of RNNs, the issues of vanishing and exploding gradients often arise, significantly impacting the model’s performance and efficiency. In this paper, we investigate why RNNs are more prone to gradient problems than other common sequential networks. To address this issue and enhance network performance, we propose a method for gradient normalization of network weights. This method suppresses the occurrence of gradient problems by altering the statistical properties of RNN weights, thereby improving training effectiveness. Additionally, we analyze the impact of weight gradient normalization on the probability-distribution characteristics of model weights and validate the sensitivity of this method to hyperparameters such as the learning rate. The experimental results demonstrate that gradient normalization enhances the stability of model training and reduces the frequency of gradient issues. On the Penn Treebank dataset, this method achieves a perplexity of 110.89, an 11.48% improvement over conventional gradient descent methods. For prediction lengths of 24 and 96 on the ETTm1 dataset, Mean Absolute Error (MAE) values of 0.778 and 0.592 are attained, respectively, yielding 3.00% and 6.77% improvements over conventional gradient descent methods. Moreover, selected subsets of the UCR dataset show an increase in accuracy ranging from 0.4% to 6.0%. The gradient normalization method enhances the ability of RNNs to learn from sequential and causal data, thereby holding significant implications for optimizing the training effectiveness of RNN-based models.
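The abstract does not spell out the exact normalization formula, but the core idea of normalizing weight gradients before the update step can be illustrated with a minimal sketch. The following toy example assumes per-matrix L2 normalization of gradients (one common scheme; the paper's actual formulation may differ), showing how it bounds the update magnitude even when a raw gradient explodes:

```python
import numpy as np

def normalize_gradient(grad, eps=1e-8):
    """Scale a weight gradient to unit L2 norm.
    This is an illustrative choice, not necessarily the paper's exact method."""
    norm = np.linalg.norm(grad)
    return grad / (norm + eps)

def sgd_step_with_grad_norm(weights, grads, lr=0.1):
    """One SGD step using normalized gradients for each weight matrix."""
    return {name: w - lr * normalize_gradient(grads[name])
            for name, w in weights.items()}

# Toy RNN weight matrices (hypothetical names: input-to-hidden, hidden-to-hidden)
rng = np.random.default_rng(0)
weights = {"W_xh": rng.normal(size=(4, 8)), "W_hh": rng.normal(size=(8, 8))}

# Simulate an exploding gradient on the recurrent weights W_hh
grads = {"W_xh": rng.normal(size=(4, 8)),
         "W_hh": 1e6 * rng.normal(size=(8, 8))}

new_weights = sgd_step_with_grad_norm(weights, grads, lr=0.1)

# Despite the huge raw gradient, the update norm is capped at lr
step = np.linalg.norm(new_weights["W_hh"] - weights["W_hh"])
```

Here the update to `W_hh` has norm ≈ `lr` regardless of the raw gradient's scale, which is the stabilizing effect the abstract attributes to gradient normalization.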