On-Policy vs. Off-Policy Reinforcement Learning for Multi-Domain SFC Embedding in SDN/NFV-Enabled Networks

Computer Science · Reinforcement Learning · Markov Decision Process · Heuristics · Quality of Service · Distributed Computing · Markov Processes · Artificial Intelligence · Computer Networks · Mathematics · Statistics
Authors
Donghao Zhao,Weisong Shi,Yu Lü,Xi Li,Yicen Liu
Source
Journal: IEEE Access (Institute of Electrical and Electronics Engineers)
Volume/Pages: 12: 123049-123070
Identifier
DOI:10.1109/access.2024.3430865
Abstract

In software defined networking (SDN)/network function virtualization (NFV)-enabled networks, service function chains (SFCs) must typically be embedded to deploy services, which entails not only meeting each service's Quality of Service (QoS) requirements but also respecting the infrastructure's resource limitations. Although this problem has received much attention in the literature, its dynamics, complexity, and unpredictability pose several difficulties for researchers and engineers. Traditional methods (e.g., exact, heuristic, meta-heuristic, and game-theoretic approaches) struggle with the complexity of multi-domain cloud network scenarios involving dynamic network states, high-speed computational requirements, and enormous volumes of service requests. Recent studies have shown that reinforcement learning (RL) is a promising way to overcome the limitations of these traditional methods. On-policy and off-policy methods are two key categories of RL models, and both offer promising advantages in dealing with dynamic resource allocation problems. This paper contributes at two levels. First, to address the SFC embedding problem in dynamic multi-domain networks, a mixed Markov model combining a Markov decision process (MDP) with a hidden Markov model (HMM) is constructed, and corresponding RL model-solving algorithms are proposed. Second, to determine which model suits a given network scenario, the on-policy RL-based multi-domain SFC embedding algorithm is compared with its off-policy counterpart. Simulation results show that the proposed RL algorithms outperform current baselines in terms of delay, load balancing, and response time. Furthermore, the off-policy algorithm is more suitable for small-scale dynamic network scenarios, whereas the on-policy algorithm is more suitable for medium- to large-scale network scenarios with stringent convergence requirements.
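The on-policy/off-policy distinction the abstract builds on can be made concrete with the two textbook tabular update rules: SARSA (on-policy) bootstraps from the action the behavior policy actually takes next, while Q-learning (off-policy) bootstraps from the greedy action regardless of what the behavior policy does. The sketch below is purely illustrative of that distinction — it is not the paper's SFC embedding algorithm, and the toy state/action names are assumptions.

```python
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Off-policy TD update: target uses max over next-state actions,
    independent of the action the behavior policy will actually pick."""
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    """On-policy TD update: target uses the action a_next actually
    selected by the current (e.g. epsilon-greedy) policy in s_next."""
    Q[s][a] += alpha * (r + gamma * Q[s_next][a_next] - Q[s][a])

# Toy example: one transition where the greedy next action ('a0',
# value 1.0) differs from the action the policy actually chose ('a1').
Q = {'s0': {'a0': 0.0}, 's1': {'a0': 1.0, 'a1': 0.0}}
q_learning_update(Q, 's0', 'a0', r=1.0, s_next='s1')
# Q-learning target = 1.0 + 0.9 * 1.0, so Q['s0']['a0'] becomes 0.19

Q2 = {'s0': {'a0': 0.0}, 's1': {'a0': 1.0, 'a1': 0.0}}
sarsa_update(Q2, 's0', 'a0', r=1.0, s_next='s1', a_next='a1')
# SARSA target = 1.0 + 0.9 * 0.0, so Q2['s0']['a0'] becomes 0.10
```

On the same transition the two rules produce different value estimates, which is exactly the divergence the paper exploits when comparing convergence behavior across small- and large-scale network scenarios.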