Deep reinforcement learning for portfolio management

Keywords: computer science; reinforcement learning; portfolio optimization; portfolio management; asset allocation; artificial intelligence; machine learning; finance; economics
Author
Shantian Yang
Source
Journal: Knowledge-Based Systems [Elsevier BV]
Volume/Issue: 278: 110905. Cited by: 12
Identifier
DOI: 10.1016/j.knosys.2023.110905
Abstract

Portfolio management involves trading off risk against return across multiple financial assets. Reinforcement Learning (RL) is among the most promising approaches to portfolio management. However, state-of-the-art RL algorithms only perform the task of portfolio management, i.e., they extract the individual asset features of a portfolio, without considering the portfolio's global context information, which leads to sub-optimal portfolio representations. Moreover, they are optimized using only a loss function from the RL viewpoint, without modeling the relationships between local asset information and global context embeddings, which leads to sub-optimal portfolio policies. To address these issues, this paper proposes a Task-Context Mutual Actor–Critic (TC-MAC) algorithm for portfolio management. Specifically, TC-MAC is built on two components: (1) for representation learning, a proposed Task-Context (TC) learning algorithm encodes not only the task of the portfolio (i.e., extracting the features of each asset) but also the portfolio's global dynamic context, which helps learn optimal portfolio embeddings; (2) for policy learning, a proposed Mutual Actor–Critic (MAC) framework measures the relationships between each asset's local embedding and the global context embeddings by maximizing mutual information; the resulting Mutual-Information loss is combined with the RL loss (i.e., the Actor–Critic loss) to jointly optimize the whole algorithm, which helps learn optimal portfolio policies. Experimental results on real-world datasets demonstrate that TC-MAC outperforms well-known traditional portfolio methods and state-of-the-art RL algorithms, while also exhibiting advantageous transferability.
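The abstract does not give TC-MAC's exact formulation, but the core idea of point (2) — maximizing mutual information between each asset's local embedding and a global context embedding, then adding that term to the actor–critic loss — can be sketched with a standard InfoNCE-style lower bound. The function names, the cosine-similarity scoring, the temperature, and the weighting coefficient `beta` below are all assumptions for illustration, not the paper's actual design:

```python
import numpy as np

def info_nce_loss(local, global_ctx, temperature=0.1):
    """InfoNCE-style lower bound on mutual information between per-asset
    local embeddings and matching global-context embeddings.
    local, global_ctx: (n_assets, d); row i of each is a positive pair,
    and the other rows serve as in-batch negatives."""
    local_n = local / np.linalg.norm(local, axis=1, keepdims=True)
    ctx_n = global_ctx / np.linalg.norm(global_ctx, axis=1, keepdims=True)
    logits = local_n @ ctx_n.T / temperature          # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))               # diagonal = positive pairs

def combined_loss(mi_loss, actor_loss, critic_loss, beta=0.5):
    """Joint objective: actor-critic (RL) loss plus a weighted MI term,
    mirroring the abstract's 'MI loss combined with Actor-Critic loss'."""
    return actor_loss + critic_loss + beta * mi_loss

# Sanity check: well-aligned local/context pairs should score a lower
# MI loss than mismatched pairs.
rng = np.random.default_rng(0)
local = rng.normal(size=(4, 8))
aligned = local + 0.01 * rng.normal(size=(4, 8))      # near-identical pairs
shuffled = local[::-1].copy()                         # every pair mismatched
assert info_nce_loss(local, aligned) < info_nce_loss(local, shuffled)
```

In a full implementation this loss would be computed on learned embeddings and backpropagated jointly with the policy and value losses; the numpy version above only illustrates the objective's shape.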
