主题 (Topics): Reinforcement learning, Artificial neural network, Competition (biology), Computer science, Function approximation, Generalization, Smoothing, Adaptability, Temporal difference learning, Function (biology), Dual (grammatical number), Variance (accounting), Artificial intelligence, Mathematical optimization, Machine learning, Mathematics, Ecology, Biology, Accounting, Literature, Mathematical analysis, Art, Business, Evolutionary biology, Computer vision
Authors
Fengjiao Zhang, Jie Li, Zhi Li
Identifier
DOI:10.1016/j.neucom.2020.05.097
Abstract
We study the problems of function approximation error and adaptability to complex missions in multi-agent deep reinforcement learning. This paper proposes a new multi-agent deep reinforcement learning framework named multi-agent time delayed deep deterministic policy gradient. Our method reduces the overestimation error of the neural-network approximation and the variance of the value estimates by using a dual-centered critic, group target-network smoothing, and delayed policy updates; according to the experimental results, this ultimately improves the agents' ability to adapt to complex missions. We then show that existing multi-agent algorithms suffer from an unavoidable overestimation issue when approximating the true action-value function with neural networks, and we analyze the approximation error of the multi-agent deep deterministic policy gradient algorithm both mathematically and experimentally. Finally, applying our algorithm in a mixed cooperative-competitive experimental environment further demonstrates its effectiveness and generalization, in particular the group's improved ability to adapt to complex missions and to complete more difficult ones.
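The abstract names three devices for taming overestimation: twin ("dual-centered") centralized critics, smoothing of the target actions, and delayed policy updates. Below is a minimal sketch, not the authors' released code, of how such a TD3-style target could be computed for one agent's centralized critic; the function name, network interfaces, and hyperparameters (policy_noise, noise_clip, gamma) are illustrative assumptions.

```python
# Illustrative sketch of a clipped double-Q target with target-action smoothing,
# as used in TD3-style multi-agent critics. Names and defaults are assumptions.
import torch


def clipped_double_q_target(reward, done, next_obs_per_agent, next_joint_obs,
                            target_actors, target_critic1, target_critic2,
                            gamma=0.99, policy_noise=0.2, noise_clip=0.5,
                            max_action=1.0):
    """TD target from the minimum of two target critics and smoothed target actions."""
    with torch.no_grad():
        next_actions = []
        for actor, obs in zip(target_actors, next_obs_per_agent):
            a = actor(obs)
            # Target-policy smoothing: perturb each agent's target action with
            # clipped noise so the value target averages over nearby actions.
            noise = (torch.randn_like(a) * policy_noise).clamp(-noise_clip, noise_clip)
            next_actions.append((a + noise).clamp(-max_action, max_action))
        joint_next_action = torch.cat(next_actions, dim=-1)

        # Twin critics: the element-wise minimum curbs overestimation bias.
        q1 = target_critic1(next_joint_obs, joint_next_action)
        q2 = target_critic2(next_joint_obs, joint_next_action)
        return reward + gamma * (1.0 - done) * torch.min(q1, q2)
```

In such a scheme both critics are regressed toward this shared target at every step, while the actor and the target networks are refreshed only once every few critic updates (the delayed policy update), so the policy is improved against a less noisy value estimate.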