Reinforcement learning
Computer science
Leverage (statistics)
Exploitation
Resource allocation
Distributed computing
Channel (broadcasting)
Resource management (computing)
Channel allocation schemes
Artificial intelligence
Cellular network
Computer network
Wireless
Telecommunications
Computer security
Authors
Anitha Saravana Kumar, Lian Zhao, Xavier Fernando
Source
Journal: IEEE Transactions on Vehicular Technology
[Institute of Electrical and Electronics Engineers]
Date: 2021-12-13
Volume/Issue: 71 (2): 1726-1736
Citations: 33
Identifier
DOI: 10.1109/tvt.2021.3134272
Abstract
Channel allocation has a direct and profound impact on the performance of vehicle-to-everything (V2X) networks. Considering the dynamic nature of vehicular environments, it is appealing to devise a blended strategy to perform effective resource sharing. In this paper, we exploit deep learning techniques to predict vehicles’ mobility patterns. We then propose an architecture consisting of centralized decision making and distributed channel allocation to maximize the spectrum efficiency of all vehicles involved. To achieve this, we leverage two deep reinforcement learning techniques, namely the deep Q-network (DQN) and advantage actor-critic (A2C) techniques. In addition, given the time-varying nature of user mobility, we further incorporate long short-term memory (LSTM) into the DQN and A2C techniques. The combined system tracks user mobility, varying demands, and channel conditions, and adapts resource allocation dynamically. We verify the performance of the proposed methods through extensive simulations and demonstrate the effectiveness of the proposed LSTM-DQN and LSTM-A2C algorithms using real data obtained from the California state transportation department.
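To make the reinforcement-learning formulation concrete, the following is a minimal sketch of value-based channel selection. It is not the paper's method: the paper trains LSTM-DQN and LSTM-A2C networks on real mobility traces, whereas this toy uses tabular Q-learning on a hypothetical single-state environment where a vehicle picks one of four channels and is rewarded only when the chosen channel is idle. All names (`step`, `train`, the busy-channel set) are illustrative assumptions.

```python
import random

# Toy stand-in for the channel-allocation setting: each step the agent
# picks one of N_CHANNELS for a vehicle; reward is 1 if the channel is
# idle and 0 if it collides with background users. Tabular Q-learning
# replaces the paper's deep (LSTM-DQN / LSTM-A2C) function approximators.
N_CHANNELS = 4
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def step(action, busy):
    """Reward 1.0 for an idle channel, 0.0 for a busy one."""
    return 1.0 if action not in busy else 0.0

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    q = [0.0] * N_CHANNELS          # single-state Q-table over channels
    busy = {0, 1}                   # channels occupied by other users (assumed fixed)
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < EPS:
            a = rng.randrange(N_CHANNELS)
        else:
            a = max(range(N_CHANNELS), key=lambda i: q[i])
        r = step(a, busy)
        # one-step Q-learning update
        q[a] += ALPHA * (r + GAMMA * max(q) - q[a])
    return q

q = train()
best = q.index(max(q))  # the learned policy picks an idle channel (2 or 3)
```

In the paper's full setting, the Q-table would be replaced by a neural network whose LSTM layer consumes a history of mobility and channel observations, so the policy can anticipate time-varying demand instead of reacting to a static occupancy pattern.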