Multi-Agent Reinforcement Learning With Decentralized Distribution Correction

Reinforcement learning, Computer science, Multi-agent systems, Decentralized systems, Artificial intelligence, Distributed computing, Engineering, Control (management)
Authors
Kuo Li, Qing-Shan Jia
Source
Journal: IEEE Transactions on Automation Science and Engineering [Institute of Electrical and Electronics Engineers]
Pages: 1-13
Identifier
DOI: 10.1109/tase.2024.3369592
Abstract

This work considers decentralized multi-agent reinforcement learning (MARL), where the global states and rewards are assumed to be fully observable, while each local behavior policy is kept local to resist adversarial attacks. To cooperatively accumulate more reward, the agents exchange messages over a time-varying communication network to reach consensus. For these cooperative tasks, we propose a decentralized actor-critic algorithm in which the agents make individual decisions, but the joint behavior policy is optimized toward higher cumulative reward. We provide a theoretical convergence analysis under the tabular setting and then extend it to nonlinear function approximation. Furthermore, by incorporating decentralized distribution correction, the agents are trained in an off-policy manner for higher sample efficiency. Finally, we conduct experiments to evaluate the algorithms; the proposed algorithm performs competitively in both stability and asymptotic performance.

Note to Practitioners: Fully decentralized MARL algorithms are widely applied in multi-agent systems to generate cooperative behaviors, e.g., multiple unmanned aerial vehicles (UAVs) cooperatively performing search-and-rescue tasks, multiple vehicles efficiently passing through a crowded intersection, and multiple robots cooperatively handling cargo or obstacles. Motivated by these potential applications, this work improves the sample efficiency of recent decentralized MARL algorithms by incorporating off-policy training. We reweight historical trajectories via a decentralized average-consensus step and develop corresponding policy-optimization procedures, so that previous trajectories can be reused to stabilize later iterations. Since the training data are augmented with historical samples, sample efficiency is significantly improved and the training process is stabilized. With the fully decentralized training approach, the proposed algorithms are expected to apply to large-scale systems, e.g., vehicle teams and UAV groups, for effective real-time control.
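As a rough illustration of the decentralized average-consensus step mentioned in the Note to Practitioners, the sketch below shows agents repeatedly mixing local scalar estimates (e.g., the local terms that enter a trajectory reweighting factor) with their neighbors over a time-varying communication graph, so that every agent converges to the network-wide average without a central coordinator. The Metropolis mixing weights, the alternating ring topologies, and the function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def metropolis_weights(adjacency: np.ndarray) -> np.ndarray:
    """Build a doubly stochastic mixing matrix from an undirected adjacency matrix
    using Metropolis weights: W[i, j] = 1 / (1 + max(deg_i, deg_j)) on edges."""
    n = adjacency.shape[0]
    deg = adjacency.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adjacency[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
    W[np.diag_indices(n)] = 1.0 - W.sum(axis=1)  # self-weight keeps rows summing to 1
    return W

def consensus_average(local_values: np.ndarray, graphs, num_rounds: int = 50) -> np.ndarray:
    """Each agent mixes its estimate with its current neighbors' estimates every round;
    over a (possibly time-varying) connected graph sequence, all estimates converge
    to the average of the initial local values."""
    x = local_values.astype(float).copy()
    for t in range(num_rounds):
        W = metropolis_weights(graphs[t % len(graphs)])  # topology may change each round
        x = W @ x
    return x

# Toy usage: 4 agents whose communication links alternate between two ring-like graphs.
graph_a = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
graph_b = np.array([[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]])
local_estimates = np.array([1.0, 3.0, 5.0, 7.0])  # hypothetical per-agent reweighting terms
print(consensus_average(local_estimates, [graph_a, graph_b]))  # all entries approach 4.0
```

In a decentralized off-policy scheme of this kind, such a consensus step would let every agent agree on a shared correction quantity using only neighbor-to-neighbor messages, which is what allows historical trajectories to be reweighted without any agent revealing its local behavior policy to a central node.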