Computer science
Reinforcement learning
Quality of service
Markov decision process
Virtual network
Distributed computing
Routing (electronic design automation)
Scalability
Computer network
Markov process
Artificial intelligence
Mathematics
Database
Statistics
Authors
Nan He, Song Yang, Fan Li, Stojan Trajanovski, Fernando Kuipers, Xiaoming Fu
Identifier
DOI:10.1109/iwqos52092.2021.9521285
Abstract
The efficacy of Network Function Virtualization (NFV) depends critically on (1) where the virtual network functions (VNFs) are placed and (2) how the traffic is routed. Unfortunately, these aspects are not easily optimized, especially under time-varying network states with different quality of service (QoS) requirements. Given the importance of NFV, many approaches have been proposed to solve the VNF placement and traffic routing problem. However, those prior approaches mainly assume that the state of the network is static and known, disregarding real-time network variations. To bridge that gap, in this paper, we formulate the VNF placement and traffic routing problem as a Markov Decision Process model to capture the dynamic network state transitions. In order to jointly minimize the delay and cost of NFV providers and maximize the revenue, we devise a customized Deep Reinforcement Learning (DRL) algorithm, called A-DDPG, for VNF placement and traffic routing in a real-time network. A-DDPG uses the attention mechanism to ascertain smooth network behavior within the general framework of network utility maximization (NUM). The simulation results show that A-DDPG outperforms the state-of-the-art in terms of network utility, delay, and cost.
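To make the MDP framing above concrete, here is a minimal toy sketch (not the paper's actual model; all nodes, coefficients, and the greedy baseline policy are invented for illustration). The state is a per-node load vector, an action places one VNF on a node, and the reward echoes the utility the abstract describes: revenue minus resource cost minus a delay penalty. A DRL agent such as A-DDPG would learn a placement policy over such transitions; the greedy least-loaded rule here is only a stand-in baseline.

```python
# Toy MDP for VNF placement (illustrative only; constants are hypothetical).
NODES = 4
REVENUE = 10.0       # hypothetical payment per admitted VNF request
COST_PER_LOAD = 2.0  # hypothetical resource-cost coefficient
DELAY_WEIGHT = 5.0   # hypothetical delay-penalty coefficient

def step(state, action):
    """Place a VNF on node `action`; return (next_state, reward)."""
    next_state = list(state)
    next_state[action] += 0.1                 # placing a VNF adds load
    delay = next_state[action]                # toy delay grows with load
    reward = REVENUE - COST_PER_LOAD * sum(next_state) - DELAY_WEIGHT * delay
    return next_state, reward

def greedy_policy(state):
    """Baseline: place on the least-loaded node (a DRL agent would learn this)."""
    return min(range(NODES), key=lambda n: state[n])

state = [0.0] * NODES
total = 0.0
for _ in range(8):                            # serve 8 VNF requests
    action = greedy_policy(state)
    state, reward = step(state, action)
    total += reward
print(round(total, 2))
```

The per-step reward makes the trade-off in the abstract explicit: each placement raises load (cost) and delay, so the accumulated reward falls as the network fills, which is exactly the tension a learned placement/routing policy must manage.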