Deep-Reinforcement Learning Multiple Access for Heterogeneous Wireless Networks

Keywords: Computer Science, Reinforcement Learning, ALOHA, Throughput, Time-Division Multiple Access, Computer Networks, Wireless Networks, Wireless, Distributed Computing, Artificial Intelligence, Telecommunications
Authors
Yonghong Yu, Taotao Wang, Soung Chang Liew
Source
Journal: IEEE Journal on Selected Areas in Communications [Institute of Electrical and Electronics Engineers]
Volume/Issue: 37 (6): 1277-1290; Cited by: 229
Identifier
DOI: 10.1109/jsac.2019.2904329
Abstract
This paper investigates a deep reinforcement learning (DRL)-based MAC protocol for heterogeneous wireless networking, referred to as Deep-reinforcement Learning Multiple Access (DLMA). Specifically, we consider the scenario of a number of networks operating different MAC protocols trying to access the time slots of a common wireless medium. A key challenge in our problem formulation is that we assume our DLMA network does not know the operating principles of the MACs of the other networks; i.e., DLMA does not know how the other MACs make decisions on when to transmit and when not to. The goal of DLMA is to learn an optimal channel access strategy that achieves a certain pre-specified global objective. Possible objectives include maximizing the sum throughput and maximizing α-fairness among all networks. The underpinning learning process of DLMA is based on DRL. With proper definitions of the state space, action space, and rewards in DRL, we show that DLMA can easily maximize the sum throughput by judiciously selecting certain time slots to transmit. Maximizing general α-fairness, however, is beyond the means of the conventional reinforcement learning (RL) framework. We put forth a new multi-dimensional RL framework that enables DLMA to maximize general α-fairness. Our extensive simulation results show that DLMA can maximize sum throughput or achieve proportional fairness (two special classes of α-fairness) when coexisting with TDMA and ALOHA MAC protocols without knowing they are TDMA or ALOHA. Importantly, we show the merit of incorporating neural networks into the RL framework (i.e., why DRL and not just traditional RL): specifically, the use of DRL allows DLMA (i) to learn the optimal strategy much faster and (ii) to be more robust, in that it can still learn a near-optimal strategy even when the parameters in the RL framework are not optimally set.
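The α-fairness objective the abstract refers to is the standard utility family from the network-utility-maximization literature: α = 0 recovers sum throughput and α → 1 recovers proportional fairness (log utility), the two special cases evaluated in the paper. A minimal sketch of that utility function (the function name and inputs here are illustrative, not from the paper):

```python
import math

def alpha_fair_utility(throughputs, alpha):
    """Sum of alpha-fair utilities over per-network throughputs.

    alpha = 0 reduces to sum throughput; alpha = 1 is defined by the
    limit and gives proportional fairness (sum of logs); larger alpha
    trades total throughput for more equal allocations.
    """
    if alpha == 1.0:
        return sum(math.log(x) for x in throughputs)
    return sum(x ** (1.0 - alpha) / (1.0 - alpha) for x in throughputs)

# alpha = 0: the utility is just the sum throughput
print(alpha_fair_utility([0.5, 0.3], alpha=0.0))  # → 0.8
```

Because the α ≠ 0 objectives are nonlinear functions of the long-run per-network throughputs rather than a sum of per-slot rewards, they do not fit the scalar-reward RL template directly, which is what motivates the paper's multi-dimensional RL framework.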