Reinforcement learning
Computer science
Multi-agent systems
Mathematics education
Psychology
Artificial intelligence
Social psychology
Authors
Jian Zhao, Xunhan Hu, Mingyu Yang, Wengang Zhou, Jiangcheng Zhu, Houqiang Li
Source
Journal: IEEE Transactions on Games
Publisher: Institute of Electrical and Electronics Engineers
Date: 2024-03-01
Volume/Issue: 16 (1): 140-150
Citations: 5
Identifiers
DOI: 10.1109/TG.2022.3232390
Abstract
Due to the partial observability and communication constraints in many multi-agent reinforcement learning (MARL) tasks, centralized training with decentralized execution (CTDE) has become one of the most widely used MARL paradigms. In CTDE, centralized information is dedicated to learning the allocation of the team reward with a mixing network, while the learning of individual Q-values is usually based only on local observations. This insufficient utilization of global observations degrades performance in challenging environments. To this end, this work proposes a novel Centralized Teacher with Decentralized Student (CTDS) framework, which consists of a teacher model and a student model. Specifically, the teacher model allocates the team reward by learning individual Q-values conditioned on global observations, while the student model uses only the partial observations to approximate the Q-values estimated by the teacher. In this way, CTDS balances the full use of global observations during training against the feasibility of decentralized execution for online inference. The CTDS framework is generic and can be applied on top of existing CTDE methods to boost their performance. Experiments on a challenging set of StarCraft II micromanagement tasks show that CTDS outperforms existing value-based MARL methods.
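The core mechanism the abstract describes — a student conditioned on local observations regressing toward Q-values produced by a teacher conditioned on global observations — can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: the linear models, dimensions, learning rate, and the assumption that the local observation is a slice of the global one are all hypothetical choices for the example.

```python
import random

random.seed(0)

# Hypothetical dimensions for illustration (not taken from the paper).
GLOBAL_DIM, LOCAL_DIM, N_ACTIONS = 6, 3, 2

def init_w(n_in, n_out):
    """Small random linear-model weights."""
    return [[random.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]

def linear(w, x):
    """Q(x)[a] = w[a] . x for each action a."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

teacher_w = init_w(GLOBAL_DIM, N_ACTIONS)  # teacher: conditions on the global observation
student_w = init_w(LOCAL_DIM, N_ACTIONS)   # student: sees only the local observation

def distill_step(global_obs, local_obs, lr=0.1):
    """One gradient step shrinking the MSE between student and teacher Q-values.

    The teacher's Q-values act as fixed regression targets (as in CTDS,
    where the student approximates the teacher's estimates)."""
    q_teacher = linear(teacher_w, global_obs)
    q_student = linear(student_w, local_obs)
    # L = (1/N) * sum_a (q_s[a] - q_t[a])^2 ;  dL/dw[a][j] = (2/N) * err_a * x_j
    for a in range(N_ACTIONS):
        err = q_student[a] - q_teacher[a]
        for j in range(LOCAL_DIM):
            student_w[a][j] -= lr * 2.0 * err * local_obs[j] / N_ACTIONS
    return sum((s - t) ** 2 for s, t in zip(q_student, q_teacher)) / N_ACTIONS

# Toy transition: the local observation is a partial view (a slice) of the global one.
g = [random.uniform(-1, 1) for _ in range(GLOBAL_DIM)]
l = g[:LOCAL_DIM]
losses = [distill_step(g, l) for _ in range(200)]
```

After training, `losses` decreases: the student's local-observation Q-values track the teacher's global-observation targets, while execution needs only the student and its local inputs.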