Reinforcement learning
Computer science
Transfer of learning
Scalability
Reinforcement
Artificial intelligence
Psychology
Social psychology
Database
Authors
Bin Chen, Zehong Cao, Quan Bai
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/Issue: 1-15
Identifier
DOI:10.1109/tnnls.2024.3387397
Abstract
It is challenging to train an efficient learning procedure with multiagent reinforcement learning (MARL) when the number of agents increases, as the observation space expands exponentially, especially in large-scale multiagent systems. In this article, we propose a scalable attentive transfer framework (SATF) for efficient MARL, which achieves goals faster and more accurately in homogeneous and heterogeneous combat tasks by transferring learned knowledge from a small number of agents (4) to a large number of agents (up to 64). To reduce and align the dimensionality of the observed state variations caused by increasing numbers of agents, the proposed SATF deploys a novel state representation network with a self-attention mechanism, termed the dynamic observation representation network (DorNet), to extract the dominant observed information cost-effectively. Experiments on the MAgent platform showed that the SATF outperformed distributed MARL baselines (independent Q-learning (IQL) and A2C) on task sequences scaling from 8 to 64 agents. Experiments on StarCraft II showed that the SATF outperformed centralized-training-with-decentralized-execution MARL (QMIX), requiring fewer training steps and achieving a desired win rate of up to approximately 90% as the number of agents increased from 4 to 32. These findings show great potential for improving the efficiency of MARL training in large-scale agent combat missions.
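The abstract's key idea is using self-attention to map a variable number of per-agent observations to a fixed-size representation, so a policy trained with few agents can be reused with many. The paper's DorNet architecture is not reproduced here; the following is a minimal illustrative sketch in which the weight names (`Wq`, `Wk`, `Wv`), single-head attention, and mean pooling over agents are all assumptions, not the authors' actual design.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_obs_encoder(obs, Wq, Wk, Wv):
    """Single-head self-attention over per-agent observation vectors.

    obs: (n_agents, d_obs) array, one row per observed agent; n_agents may vary.
    Returns a fixed-size (d_v,) summary, so the policy's input dimension
    stays constant as the number of agents grows (e.g., from 4 to 64).
    """
    Q = obs @ Wq                               # queries, (n, d_k)
    K = obs @ Wk                               # keys,    (n, d_k)
    V = obs @ Wv                               # values,  (n, d_v)
    scores = Q @ K.T / np.sqrt(Wq.shape[1])    # scaled dot-product, (n, n)
    attn = softmax(scores, axis=-1)            # attention weights per agent
    attended = attn @ V                        # (n, d_v)
    return attended.mean(axis=0)               # pool over agents -> fixed size

# Demo: the encoding size is identical for 4 and 64 agents.
rng = np.random.default_rng(0)
d_obs, d_k, d_v = 8, 16, 32
Wq = rng.normal(size=(d_obs, d_k))
Wk = rng.normal(size=(d_obs, d_k))
Wv = rng.normal(size=(d_obs, d_v))
z4 = attentive_obs_encoder(rng.normal(size=(4, d_obs)), Wq, Wk, Wv)
z64 = attentive_obs_encoder(rng.normal(size=(64, d_obs)), Wq, Wk, Wv)
print(z4.shape, z64.shape)  # both (32,)
```

Because attention weights are computed pairwise and then pooled, the output dimension is independent of the agent count, which is what makes transferring a policy across team sizes possible in the first place.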