Computer Science
Upload
Backhaul (telecommunications)
Reinforcement Learning
Markov Decision Process
Computer Network
Base Station
Cache (computing)
Popularity
Algorithm
Markov Process
Artificial Intelligence
Operating System
Social Psychology
Statistics
Mathematics
Psychology
Source
Journal: IEEE Internet of Things Journal
[Institute of Electrical and Electronics Engineers]
Date: 2023-06-01
Volume/Issue: 10 (11): 9585-9596
Citations: 3
Identifier
DOI: 10.1109/jiot.2023.3235661
Abstract
With the explosive growth of content request services in the vehicle network, there is an urgent need to speed up the response process for content requests and reduce the backhaul burden on base stations. However, most traditional content caching strategies consider only content popularity or cluster-based caching individually, and the access paths are fixed. This paper proposes a collaborative caching strategy for reinforcement learning (RL)-based content downloading. Specifically, the vehicles are first clustered by the K-means algorithm, and the content transmission distance is reduced by caching high-popularity content at the cluster head. Then, based on historical content request information, a long short-term memory (LSTM) network is used to predict content popularity. Content with high popularity is collaboratively cached at the base station and the cluster heads. Finally, the content downloading problem is formulated as a Markov decision process and solved with a deep reinforcement learning algorithm, Deep Q Network (DQN), whose objective is to minimize the weighted cost comprising the downloading delay and the failure cost. With the DQN algorithm, the cluster head can make the access decision for each content request. The proposed collaborative caching strategy for RL-based content downloading can greatly shorten the response process and reduce the burden on the base station. Simulation results show that the proposed RL-based method achieves outstanding performance, improving the access hit ratio and reducing the content downloading delay.
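The access-decision step described in the abstract can be illustrated with a toy example. The paper uses a Deep Q Network over the real vehicular state space; as a minimal stand-in, the sketch below uses tabular Q-learning on a hypothetical two-node MDP (all names, delays, and costs are illustrative assumptions, not the paper's values): the state says whether the requested content is cached at the cluster head, the action picks where to fetch it from, and the reward is the negative weighted cost of delay plus failure cost.

```python
import random

# Toy content-access MDP (hypothetical values, not from the paper).
# State: 1 if the content is cached at the cluster head, else 0.
#        The base station is assumed to always hold the content.
# Action: 0 = fetch from the cluster head cache, 1 = fetch from the base station.
# Reward: negative weighted cost = -(download delay + failure cost).

HEAD_DELAY = 1.0   # fetching from the nearby cluster head is fast
BS_DELAY = 5.0     # fetching over the backhaul from the base station is slow
FAIL_COST = 10.0   # extra cost when the cluster head cache misses

def reward(state, action):
    """Negative cost of serving a request from `action` given cache `state`."""
    if action == 0:  # try the cluster head cache
        return -(HEAD_DELAY + (FAIL_COST if state == 0 else 0.0))
    return -BS_DELAY  # base station always succeeds, but is slower

def train(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    for _ in range(episodes):
        s = rng.randint(0, 1)              # random cache state per request
        if rng.random() < epsilon:         # epsilon-greedy exploration
            a = rng.randint(0, 1)
        else:
            a = max((0, 1), key=lambda x: q[(s, x)])
        # One-step Q update; each content request is treated as one episode.
        q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])
    return q

q = train()
# Greedy policy: fetch locally when cached, otherwise go to the base station.
policy = {s: max((0, 1), key=lambda a: q[(s, a)]) for s in (0, 1)}
```

After training, the learned policy fetches from the cluster head only on a cache hit and falls back to the base station on a miss, which mirrors the delay-versus-failure trade-off the DQN optimizes in the paper.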