Computer Science, Reinforcement Learning, Order (Exchange), Task (Project Management), Protocol (Science), Operator (Biology), Artificial Intelligence, Finance, Transcription Factor, Gene, Pathology, Repressor, Economics, Medicine, Chemistry, Management, Alternative Medicine, Biochemistry
Authors
Yuchen Fang, Zhenggang Tang, Kan Ren, Weiqing Liu, Li Zhao, Jiang Bian, Dongsheng Li, Weinan Zhang, Yong Yu, Tie-Yan Liu
Identifier
DOI:10.1145/3580305.3599856
Abstract
Order execution is a fundamental task in quantitative finance, aiming to complete the acquisition or liquidation of a number of trading orders for specific assets. Recent advances in model-free reinforcement learning (RL) provide a data-driven solution to the order execution problem. However, existing works optimize execution for an individual order in isolation, overlooking the practical setting in which multiple orders must be executed simultaneously, which results in suboptimality and bias. In this paper, we first present a multi-agent RL (MARL) method for multi-order execution that accounts for practical constraints. Specifically, we treat every agent as an individual operator trading one specific order, while the agents communicate with each other and collaborate to maximize overall profits. However, existing MARL algorithms often implement communication among agents by exchanging only information about their partial observations, which is inefficient in complicated financial markets. To improve collaboration, we then propose a learnable multi-round communication protocol in which agents communicate their intended actions with each other and refine them accordingly. It is optimized through a novel action value attribution method that is provably consistent with the original learning objective yet more efficient. Experiments on data from two real-world markets demonstrate superior performance, with significantly better collaboration effectiveness achieved by our method.
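The multi-round, intention-aware communication described in the abstract can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: the agent count, feature dimensions, number of rounds, and the `refine` helper are all hypothetical placeholders, and the learned communication networks and the action value attribution method of the actual paper are omitted. It only shows the message-passing pattern, assuming one agent per order: each agent proposes an intended action (here, a fraction of its remaining order volume to trade now), reads the other agents' intentions, and refines its own over several rounds.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 4   # one agent per order (hypothetical setup)
OBS_DIM = 8    # per-agent market/order features (assumed dimension)
ACT_DIM = 1    # intended action: fraction of remaining volume to trade
ROUNDS = 3     # number of communication rounds (assumed)

# Hypothetical per-agent weights mapping [own observation | other agents'
# intended actions] to a refined intention; the paper learns such mappings
# end-to-end, here they are random placeholders.
W = rng.normal(scale=0.1,
               size=(N_AGENTS, OBS_DIM + (N_AGENTS - 1) * ACT_DIM, ACT_DIM))

def refine(obs, intentions):
    """One communication round: each agent reads the others' intended
    actions and outputs a refined intention of its own."""
    new_intentions = np.empty_like(intentions)
    for i in range(N_AGENTS):
        others = np.delete(intentions, i, axis=0).ravel()
        x = np.concatenate([obs[i], others])
        # Squash to [0, 1]: trade between 0% and 100% of remaining volume.
        new_intentions[i] = 1.0 / (1.0 + np.exp(-x @ W[i]))
    return new_intentions

obs = rng.normal(size=(N_AGENTS, OBS_DIM))
intentions = np.full((N_AGENTS, ACT_DIM), 0.5)  # neutral initial intention
for _ in range(ROUNDS):
    intentions = refine(obs, intentions)

print("final actions (fraction of remaining order):", intentions.ravel())
```

In the paper's setting, the refinement step would be a trained network optimized jointly with the RL objective via the proposed action value attribution, rather than a fixed random map as in this sketch.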