Keywords
Workload
Computer science
Microservices
Scalability
Edge computing
Distributed computing
Server
Real-time computing
Computer networking
Cloud computing
Database
Operating system
Artificial intelligence
Authors
Ke Cheng,Sheng Zhang,Chenghong Tu,Xiaohang Shi,Zhaoheng Yin,Sanglu Lu,Yu Liang,Qing Gu
Source
Journal: IEEE Transactions on Parallel and Distributed Systems
[Institute of Electrical and Electronics Engineers]
Date: 2023-04-01
Volume/Issue: 34 (4): 1294-1312
Citations: 9
Identifier
DOI:10.1109/tpds.2023.3238429
Abstract
Deploying microservice instances on edge devices close to end users enables on-site processing and thus reduces request response time. Each microservice has multiple instances that can process requests in parallel. To achieve high processing efficiency, the number of these instances is scaled according to the workload, a practice known as autoscaling. Previous studies of microservice autoscaling in edge computing environments lack an in-depth consideration of time-varying workloads: they assume that the workload of each microservice always depends on that of its upstream microservices. However, through an analysis of Alibaba's microservice trace with hundreds of millions of records, we find that this assumption does not hold in practice and hurts autoscaling effectiveness. To solve this problem, we propose ProScale, a prediction-driven proactive autoscaling framework for microservices at the edge. ProScale proactively forecasts the workload of each individual microservice per timeslot. It then uses an efficient online algorithm that leverages the prediction results to jointly determine the number of instances for each microservice and make placement decisions. For each microservice instance deployed on an edge device, ProScale handles burst requests with a dedicated offloading strategy. In addition, ProScale balances the load across the multiple instances of each microservice. Extensive trace-driven experiments show that ProScale has great scalability: it can reduce average response time by 96.7% and resource usage by 96.5% compared with existing strategies and designed baselines.
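The core idea of prediction-driven proactive autoscaling can be illustrated with a minimal sketch. This is not the authors' actual ProScale algorithm (which jointly handles placement, offloading, and load balancing); it is an assumed simplification that forecasts the next timeslot's per-microservice workload with exponential smoothing and sizes the instance count to the predicted demand. The function names, the smoothing factor `alpha`, and `capacity_per_instance` are all illustrative assumptions, not parameters from the paper.

```python
import math

def forecast_workload(history, alpha=0.5):
    """Predict next-timeslot workload by exponential smoothing
    over past per-timeslot request counts (illustrative model only)."""
    est = float(history[0])
    for observed in history[1:]:
        est = alpha * observed + (1 - alpha) * est
    return est

def scale_instances(history, capacity_per_instance, min_instances=1):
    """Return the number of instances needed so the predicted
    workload fits within the per-instance request capacity."""
    predicted = forecast_workload(history)
    return max(min_instances, math.ceil(predicted / capacity_per_instance))

# Example: workload has been growing; scale ahead of the next timeslot.
history = [100, 120, 140]          # requests per timeslot
n = scale_instances(history, capacity_per_instance=50)
```

Reactive autoscalers would only add instances after the workload of timeslot 4 arrives; a proactive scheme like the one sketched above provisions them in advance, which is what reduces response time under time-varying load.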