ProScale: Proactive Autoscaling for Microservice With Time-Varying Workload at the Edge

Workload · Computer science · Microservices · Scalability · Leverage (statistics) · Enhanced Data Rates for GSM Evolution (EDGE) · Edge computing · Distributed computing · Trace · Server · Real-time computing · Computer network · Cloud computing · Database · Operating system · Artificial intelligence · Linguistics · Philosophy
Authors
Ke Cheng, Sheng Zhang, Chenghong Tu, Xiaohang Shi, Zhaoheng Yin, Sanglu Lu, Yu Liang, Qing Gu
Source
Journal: IEEE Transactions on Parallel and Distributed Systems [Institute of Electrical and Electronics Engineers]
Volume/Issue: 34(4): 1294-1312 · Cited by: 9
Identifier
DOI: 10.1109/tpds.2023.3238429
Abstract

Deploying microservice instances on edge devices close to end users provides on-site processing and thus reduces request response time. Each microservice has multiple instances that can process requests in parallel. To achieve high processing efficiency, the number of instances is scaled according to the workload, a process known as autoscaling. Previous studies of microservice autoscaling in edge computing environments lack in-depth consideration of time-varying workloads: they assume that the workload of each microservice always depends on that of its upstream. However, through an analysis of Alibaba's microservice trace with hundreds of millions of records, we find that this assumption is impractical and hurts autoscaling effectiveness. To solve this problem, we propose ProScale, a prediction-driven proactive autoscaling framework for microservices at the edge. ProScale proactively forecasts the workload of each individual microservice per timeslot. It then uses an efficient online algorithm that leverages the predicted workloads to determine the number of instances for each microservice jointly with making placement decisions. For each microservice instance deployed on an edge device, ProScale handles burst requests with a designed offloading strategy. In addition, ProScale balances the load across the multiple instances of each microservice. Extensive trace-driven experiments show that ProScale has great scalability: it reduces average response time by 96.7% and resource usage by 96.5% compared with existing strategies and designed baselines.