Proactive Auto-Scaling Technique for Web Applications in Container-Based Edge Computing Using Federated Learning Model

Keywords: computer science, cloud computing, provisioning, edge computing, edge devices, distributed computing, scheduling (production processes), Enhanced Data Rates for GSM Evolution (EDGE), containers (type theory), computer networks, artificial intelligence, operating systems, mechanical engineering, operations management, engineering, economics
Authors
Javad Dogani, Farshad Khunjush
Source
Journal: Journal of Parallel and Distributed Computing [Elsevier]
Volume 187, Article 104837. Cited by: 1
Identifier
DOI: 10.1016/j.jpdc.2024.104837
Abstract

Edge computing has emerged as an attractive alternative to traditional cloud computing by placing processing, network, and storage resources close to end devices such as Internet of Things (IoT) sensors. Edge computing is still in its infancy, however, and resource provisioning and service scheduling remain open research concerns. Kubernetes is a container orchestration tool for distributed environments. Proactive auto-scaling techniques in Kubernetes improve resource utilization by allocating resources based on predicted future workload. However, the prediction models typically run on central cloud nodes, necessitating data transfer between edge and cloud nodes, which increases latency and response time. We present FedAvg-BiGRU, a proactive auto-scaling method for edge computing based on FedAvg and multi-step prediction with a Bidirectional Gated Recurrent Unit (BiGRU). FedAvg is a technique for aggregating client model updates when training machine learning models under Federated Learning (FL). FL reduces network traffic by exchanging only model updates rather than raw data, removing the need to gather training data on a centralized cloud server. In addition, a technique for determining the number of Kubernetes pods based on the Cool Down Time (CDT) concept has been developed, preventing contradictory scaling actions. To our knowledge, our work is the first to employ FL for proactive auto-scaling in cloud and edge computing. The results demonstrate that FedAvg-BiGRU has a slightly higher prediction error than centralized processing of the load, although the difference is not statistically significant, while reducing the amount of data transmitted between the edge nodes and the cloud server.
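The two mechanisms the abstract names — FedAvg aggregation of client model updates and CDT-gated pod-count decisions — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the names (`fedavg`, `CooldownScaler`) and the parameters (a fixed per-pod request capacity, pod-count bounds) are hypothetical, and the paper's BiGRU predictor is abstracted away as a `predicted_load` input.

```python
import math
import time

def fedavg(client_weights, client_sizes):
    # FedAvg: average each model parameter across clients, weighted by
    # the number of local training samples held by each client.
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

class CooldownScaler:
    """Choose a Kubernetes pod count from a predicted workload, acting only
    if the Cool Down Time (CDT) since the last scaling action has elapsed,
    which prevents contradictory scale-up/scale-down oscillation."""

    def __init__(self, cdt_seconds, pod_capacity, min_pods=1, max_pods=10):
        self.cdt = cdt_seconds        # cool-down window in seconds
        self.cap = pod_capacity       # requests/sec one pod serves (assumed)
        self.min_pods = min_pods
        self.max_pods = max_pods
        self.pods = min_pods
        self.last_action = -math.inf  # timestamp of the last scaling action

    def decide(self, predicted_load, now=None):
        now = time.monotonic() if now is None else now
        desired = math.ceil(predicted_load / self.cap)
        desired = max(self.min_pods, min(self.max_pods, desired))
        # Only rescale if the desired count changed AND the CDT has elapsed.
        if desired != self.pods and now - self.last_action >= self.cdt:
            self.pods = desired
            self.last_action = now
        return self.pods
```

For example, with `pod_capacity=100` and `cdt_seconds=60`, a predicted load of 250 req/s scales to 3 pods; a new prediction of 450 req/s arriving 30 s later is ignored until the cool-down window expires.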