Workflow
Computer science
Cloud computing
Directed acyclic graph
Machine learning
Artificial intelligence
Data mining
Scheduling (production processes)
Graph
Distributed computing
Theoretical computer science
Database
Algorithm
Operating system
Operations management
Economics
Authors
Jixiang Yu, Ming Gao, Yuchan Li, Zehui Zhang, W.H. Ip, Kai Leung Yung
Identifiers
DOI: 10.1016/j.jii.2022.100337
Abstract
With the rapid growth of cloud computing, efficient operational optimization and resource scheduling of complex cloud business processes rely on real-time and accurate performance prediction. Previous research on cloud computing performance prediction has focused on qualitative (heuristic-rule), model-driven, or coarse-grained time-series approaches, which ignore the historical performance, resource allocation status, and service sequence relationships of workflow services. There are even fewer studies on prediction for workflow graph data, owing to the lack of available public datasets. In this study, from Alibaba Cloud's Cluster-trace-v2018, we extract nearly one billion offline task instance records into a new dataset containing approximately one million workflows and their corresponding directed acyclic graph (DAG) matrices. We propose a novel workflow performance prediction model (DAG-Transformer) to address these challenges. In DAG-Transformer, we design a customized position encoding matrix and an attention mask for workflows, which make full use of workflow sequential and graph relations to improve the embedding representation and perception ability of the deep neural network. The experiments validate the necessity of integrating graph-structure information in workflow prediction. Compared with mainstream deep learning (DL) methods and several classic machine learning (ML) algorithms, DAG-Transformer achieves the highest accuracy: 85-92% for CPU prediction and 94-98% for memory prediction, while maintaining high efficiency and low overhead. This study establishes a new paradigm and baseline for workflow performance prediction and provides a new way to facilitate workflow scheduling.
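To make the core idea concrete, the sketch below shows one way a workflow's DAG adjacency matrix could be turned into a Transformer attention mask so that each task only attends to structurally related tasks. This is a minimal illustration in PyTorch, not the authors' DAG-Transformer implementation: the helper names (dag_attention_mask, WorkflowEncoder), the feature dimensions, the mask convention, and the single-value CPU-usage head are all assumptions, and the paper's customized position encoding matrix is omitted.

```python
# Minimal sketch (assumed, not the paper's code): derive an attention mask
# from a workflow DAG and feed per-task features through a Transformer encoder.
import torch
import torch.nn as nn


def dag_attention_mask(adj: torch.Tensor) -> torch.Tensor:
    """Build a boolean attention mask from a DAG adjacency matrix.

    adj[i, j] != 0 means there is an edge from task i to task j.
    True entries in the returned mask mark positions attention may NOT use,
    following torch.nn.Transformer's boolean-mask convention.
    """
    n = adj.size(0)
    # Allow attention between tasks connected in either direction of the DAG,
    # plus self-attention on the diagonal; block everything else.
    reachable = (adj + adj.t()) > 0
    allowed = reachable | torch.eye(n, dtype=torch.bool)
    return ~allowed


class WorkflowEncoder(nn.Module):
    """Tiny Transformer encoder over a workflow's task sequence (illustrative)."""

    def __init__(self, feat_dim: int = 16, d_model: int = 64, nhead: int = 4):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)  # e.g. one predicted resource value per task

    def forward(self, task_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        mask = dag_attention_mask(adj)               # (n_tasks, n_tasks)
        h = self.encoder(self.proj(task_feats), mask=mask)
        return self.head(h).squeeze(-1)              # (batch, n_tasks)


# Toy usage: a 4-task workflow with 16 hypothetical features per task.
adj = torch.tensor([[0, 1, 1, 0],
                    [0, 0, 0, 1],
                    [0, 0, 0, 1],
                    [0, 0, 0, 0]], dtype=torch.float32)
feats = torch.randn(1, 4, 16)                        # (batch, n_tasks, feat_dim)
model = WorkflowEncoder()
print(model(feats, adj).shape)                       # torch.Size([1, 4])
```

The design choice illustrated here is the one the abstract emphasizes: restricting attention with a DAG-derived mask injects the workflow's graph structure into the sequence model instead of treating tasks as an unordered or purely sequential series.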