Journal: IEEE Transactions on Games [Institute of Electrical and Electronics Engineers] Date: 2023-08-07 Volume/Issue: 16 (2): 470-482 Citations: 2
Identifiers
DOI: 10.1109/TG.2023.3302694
Abstract
State-of-the-art multi-agent policy gradient (MAPG) methods have demonstrated convincing capability in many cooperative games. However, the exponentially growing joint-action space severely challenges the critic's value evaluation and hinders the performance of MAPG methods. To address this issue, we augment Central-Q policy gradient with a joint-action embedding function and propose Mutual-information Maximization MAPG (M3APG). The joint-action embedding function makes joint-actions carry information about state transitions, which improves the critic's generalization over the joint-action space by allowing it to infer the outcomes of joint-actions. We theoretically prove that, with a fixed joint-action embedding function, the convergence of M3APG is guaranteed. Experimental results on the StarCraft Multi-Agent Challenge (SMAC) demonstrate that M3APG provides more accurate value evaluation and outperforms other MAPG baseline models across maps of multiple difficulty levels. We empirically show that our joint-action embedding model can be extended to value-based multi-agent reinforcement learning methods and state-of-the-art MAPG methods. Finally, we run an ablation study to show that the use of mutual information in our method is necessary and effective.
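As a rough illustration of the idea described in the abstract, the sketch below trains a joint-action embedding so that it retains state-transition information, using next-state prediction as a simple surrogate for the mutual-information objective. This is not the paper's implementation: all module names, layer sizes, and the reconstruction-style loss are illustrative assumptions.

```python
# Minimal sketch: a joint-action embedding trained to predict state transitions,
# a surrogate for maximizing mutual information between the embedding and the
# transition. All names and architecture details are assumptions for illustration.
import torch
import torch.nn as nn

class JointActionEmbedding(nn.Module):
    def __init__(self, n_agents, n_actions, state_dim, embed_dim=32):
        super().__init__()
        # Encode the concatenated one-hot joint action into a compact embedding.
        self.encoder = nn.Sequential(
            nn.Linear(n_agents * n_actions, 64), nn.ReLU(),
            nn.Linear(64, embed_dim),
        )
        # Predict the next state from (state, embedding); a good predictor
        # forces the embedding to keep transition-relevant information.
        self.transition_head = nn.Sequential(
            nn.Linear(state_dim + embed_dim, 64), nn.ReLU(),
            nn.Linear(64, state_dim),
        )

    def forward(self, joint_action_onehot):
        return self.encoder(joint_action_onehot)

    def transition_loss(self, state, joint_action_onehot, next_state):
        z = self.encoder(joint_action_onehot)
        pred_next = self.transition_head(torch.cat([state, z], dim=-1))
        return nn.functional.mse_loss(pred_next, next_state)

# Usage with random tensors standing in for a replay batch.
n_agents, n_actions, state_dim, batch = 3, 5, 16, 8
model = JointActionEmbedding(n_agents, n_actions, state_dim)
state = torch.randn(batch, state_dim)
next_state = torch.randn(batch, state_dim)
actions = torch.randint(0, n_actions, (batch, n_agents))
joint_onehot = nn.functional.one_hot(actions, n_actions).float().reshape(batch, -1)
loss = model.transition_loss(state, joint_onehot, next_state)
loss.backward()  # the embedding would then be fed to the centralized critic with the state
```

In this sketch the embedding is trained separately from the critic, which loosely mirrors the paper's assumption of a fixed embedding function when analyzing convergence; the exact training objective and how the embedding enters the Central-Q critic follow the paper, not this example.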