Deep-Reinforcement Learning Multiple Access for Heterogeneous Wireless Networks

Keywords: Computer Science, Reinforcement Learning, ALOHA, Throughput, TDMA, Computer Networks, Wireless Networks, Wireless, Distributed Computing, Artificial Intelligence, Telecommunications
Authors
Yonghong Yu, Taotao Wang, Soung Chang Liew
Source
Journal: IEEE Journal on Selected Areas in Communications [Institute of Electrical and Electronics Engineers]
Volume/Issue: 37 (6): 1277-1290. Cited by: 229
Identifier
DOI: 10.1109/jsac.2019.2904329
Abstract

This paper investigates a deep reinforcement learning (DRL)-based MAC protocol for heterogeneous wireless networking, referred to as Deep-reinforcement Learning Multiple Access (DLMA). Specifically, we consider the scenario of a number of networks operating different MAC protocols trying to access the time slots of a common wireless medium. A key challenge in our problem formulation is that we assume our DLMA network does not know the operating principles of the MACs of the other networks; i.e., DLMA does not know how the other MACs make decisions on when to transmit and when not to. The goal of DLMA is to learn an optimal channel access strategy that achieves a certain pre-specified global objective. Possible objectives include maximizing the sum throughput and maximizing α-fairness among all networks. The underpinning learning process of DLMA is based on DRL. With proper definitions of the state space, action space, and rewards in DRL, we show that DLMA can easily maximize the sum throughput by judiciously selecting certain time slots to transmit. Maximizing general α-fairness, however, is beyond the means of the conventional reinforcement learning (RL) framework. We put forth a new multi-dimensional RL framework that enables DLMA to maximize general α-fairness. Our extensive simulation results show that DLMA can maximize sum throughput or achieve proportional fairness (two special classes of α-fairness) when coexisting with TDMA and ALOHA MAC protocols without knowing they are TDMA or ALOHA. Importantly, we show the merit of incorporating neural networks into the RL framework (i.e., why DRL and not just traditional RL): specifically, the use of DRL allows DLMA (i) to learn the optimal strategy much faster and (ii) to be more robust, in that it can still learn a near-optimal strategy even when the parameters in the RL framework are not optimally set.