A Self-Rewarding Mechanism in Deep Reinforcement Learning for Trading Strategy Optimization

Keywords: Reinforcement learning, Computer science, Artificial intelligence, Machine learning, Bellman equation, Trading strategy, Value (mathematics), Mathematical optimization, Finance, Mathematics, Economics
Authors
Yuling Huang,Chujin Zhou,Lin Zhang,Xiaoping Lu
Source
Journal: Mathematics [MDPI AG]
Volume/Issue: 12(24): 4020
Identifier
DOI: 10.3390/math12244020
Abstract

Reinforcement Learning (RL) is increasingly applied to complex decision-making tasks such as financial trading, yet designing effective reward functions remains a significant challenge: traditional static reward functions often fail to adapt to dynamic environments, leading to inefficient learning. This paper presents a novel approach, Self-Rewarding Deep Reinforcement Learning (SRDRL), which integrates a self-rewarding network into the RL framework. The SRDRL mechanism operates in two primary phases. In the first phase, the self-rewarding network is trained with supervised learning on expert knowledge, using advanced time-series feature-extraction models including TimesNet and WFTNet; its parameters are refined by comparing predicted rewards with expert-labeled rewards based on metrics such as Min-Max, Sharpe Ratio, and Return. In the second phase, the model selects the higher of the expert-labeled and predicted rewards as the RL reward and stores it in the replay buffer. This combination of expert knowledge and predicted rewards enhances the performance of trading strategies. The proposed implementation, Self-Rewarding Double DQN (SRDDQN), demonstrates that the self-rewarding mechanism improves learning and optimizes trading decisions. Experiments on the DJI, IXIC, and SP500 datasets show that SRDDQN achieves a cumulative return of 1124.23% on IXIC, significantly outperforming the next-best method, Fire (DQN-HER), which achieved 51.87%. SRDDQN also improves the stability and efficiency of trading strategies, providing notable gains over traditional RL methods. Integrating a self-rewarding mechanism within RL addresses a critical limitation in reward-function design and offers a scalable, adaptable solution for complex, dynamic trading environments.
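To make the two-phase mechanism concrete, the sketch below shows one way it could be structured in PyTorch, based only on the description in the abstract. It is not the authors' implementation: the small MLP stands in for the TimesNet/WFTNet feature extractors, and the names SelfRewardingNet, fit_self_rewarding, select_reward, and double_dqn_target are hypothetical placeholders.

```python
# Minimal sketch of the SRDRL two-phase idea (assumptions, not the paper's code).
import torch
import torch.nn as nn


class SelfRewardingNet(nn.Module):
    """Maps a flattened window of market features to a scalar predicted reward."""

    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)


def fit_self_rewarding(net, states, expert_rewards, epochs=200, lr=1e-3):
    """Phase 1: supervised regression of predicted rewards onto expert-labeled
    rewards (e.g. a Min-Max-normalized return or Sharpe-style label)."""
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(net(states), expert_rewards)
        loss.backward()
        optimizer.step()
    return net


def select_reward(net, state, expert_reward):
    """Phase 2: keep the larger of the expert-labeled and predicted rewards;
    this value is what would be stored in the replay buffer."""
    with torch.no_grad():
        predicted = net(state.unsqueeze(0)).item()
    return max(expert_reward, predicted)


def double_dqn_target(online_q, target_q, reward, next_state, gamma=0.99, done=False):
    """Standard Double DQN bootstrap target built on the selected reward:
    the online network picks the next action, the target network evaluates it."""
    with torch.no_grad():
        best_action = online_q(next_state).argmax(dim=-1, keepdim=True)
        next_value = target_q(next_state).gather(-1, best_action).squeeze(-1)
    return reward + gamma * (0.0 if done else 1.0) * next_value
```

Taking the maximum of the expert-labeled and predicted rewards means the training signal is never weaker than the expert label, which is one plausible reading of how the mechanism aids learning stability; the exact labeling and reward-selection details are those described in the paper.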
