Authors
Xingzhou Lou, Junge Zhang, Yali Du, Chao Yu, Zhaofeng He, Kaiqi Huang
Identifier
DOI: 10.1109/tg.2023.3302694
Abstract
State-of-the-art multi-agent policy gradient (MAPG) methods have demonstrated convincing capability in many cooperative games. However, the exponentially growing joint-action space severely challenges the critic's value evaluation and hinders the performance of MAPG methods. To address this issue, we augment Central-Q policy gradient with a joint-action embedding function and propose Mutual-information Maximization MAPG (M3APG). The joint-action embedding function encodes state-transition information into joint-actions, which improves the critic's generalization over the joint-action space by allowing it to infer the outcomes of joint-actions. We theoretically prove that with a fixed joint-action embedding function, the convergence of M3APG is guaranteed. Experimental results on the StarCraft Multi-Agent Challenge (SMAC) demonstrate that M3APG gives more accurate value evaluations and outperforms other basic MAPG models across various maps of multiple difficulty levels. We empirically show that our joint-action embedding model can be extended to value-based multi-agent reinforcement learning methods and state-of-the-art MAPG methods. Finally, we run an ablation study to show that the use of mutual information in our method is necessary and effective.
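To make the abstract's central idea concrete, below is a minimal PyTorch sketch of a joint-action embedding trained so that the embedding carries state-transition information. Everything here is an illustrative assumption rather than the paper's implementation: the module names (JointActionEmbedding, TransitionPredictor), the network sizes, and the use of a next-state prediction loss as a simple proxy for the paper's mutual-information objective are all hypothetical.

```python
import torch
import torch.nn as nn

class JointActionEmbedding(nn.Module):
    """Maps a concatenated (one-hot) joint action to a compact embedding.

    Illustrative stand-in for the paper's joint-action embedding function;
    the actual architecture and objective may differ.
    """
    def __init__(self, n_agents, n_actions, embed_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_agents * n_actions, 64),
            nn.ReLU(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, joint_action):
        return self.net(joint_action)

class TransitionPredictor(nn.Module):
    """Predicts next-state features from (state, joint-action embedding).

    Training the embedding through this predictor is one simple proxy for
    maximizing mutual information between the embedding and the state
    transition (an assumption here, not the paper's exact MI estimator).
    """
    def __init__(self, state_dim, embed_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + embed_dim, 128),
            nn.ReLU(),
            nn.Linear(128, state_dim),
        )

    def forward(self, state, action_embed):
        return self.net(torch.cat([state, action_embed], dim=-1))

# Hypothetical training step on dummy data: the gradient flows through the
# embedding, pushing transition information into it.
n_agents, n_actions, state_dim = 3, 5, 16
embed_fn = JointActionEmbedding(n_agents, n_actions)
predictor = TransitionPredictor(state_dim)
opt = torch.optim.Adam(
    list(embed_fn.parameters()) + list(predictor.parameters()), lr=1e-3
)

state = torch.randn(8, state_dim)                     # batch of states s_t
next_state = torch.randn(8, state_dim)                # observed s_{t+1} (dummy)
joint_action = torch.randn(8, n_agents * n_actions)   # one-hot in practice

pred = predictor(state, embed_fn(joint_action))
loss = nn.functional.mse_loss(pred, next_state)       # prediction proxy for MI
opt.zero_grad()
loss.backward()
opt.step()
```

Under this reading, a Central-Q critic would evaluate joint actions through the compact embedding, i.e. something like Q(s, e(a)), rather than over the exponentially large raw joint-action space, which is the generalization benefit the abstract describes.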