Keywords
Computer Science, Distributed Computing, Dynamic Priority Scheduling, Scheduling (Production Processes), Job Scheduler, Big Data, Spark (Programming Language), Fair-Share Scheduling, Rate-Monotonic Scheduling, Partition Problem, Fixed-Priority Preemptive Scheduling, Partition (Number Theory), Parallel Computing, Mathematical Optimization, Queueing, Data Mining, Computer Network, Quality of Service, Mathematics, Programming Language, Combinatorics
Authors
SenXing Lu, Mingming Zhao, Chunlin Li, Quanbing Du, Yingwei Luo
Identifier
DOI: 10.1093/comjnl/bxad017
Abstract
The Spark computing framework provides an efficient solution to the major requirements of big data processing, but data partitioning and job scheduling are two major bottlenecks that limit Spark's performance. In the Spark Shuffle phase, data skew caused by unbalanced partitioning increases job completion time. To address this, a balanced partitioning strategy for intermediate data is proposed in this article: it considers the characteristics of the intermediate data, establishes a data skew model and introduces a dynamic partitioning algorithm. In heterogeneous Spark clusters, differences in node performance and task requirements prevent the default task scheduler from scheduling efficiently, which lowers the system's task-processing efficiency. To deal with this, an efficient job scheduling strategy is proposed that integrates node performance and task requirements through a greedy task scheduling algorithm. The experimental results show that the proposed dynamic partitioning algorithm for intermediate data effectively alleviates the loss of task-processing efficiency caused by data skew and shortens the overall task completion time. The proposed job scheduling strategy efficiently schedules jobs on heterogeneous clusters, allocates jobs to nodes in a balanced manner, decreases the overall job completion time and increases system resource utilization.
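To make the first idea concrete, below is a minimal sketch of a skew-aware Spark partitioner. It is not the authors' algorithm (the abstract does not give its details); it only illustrates the general technique of balancing intermediate data across partitions. The class name SkewAwarePartitioner and the keyWeights input (estimated record counts for sampled keys, which a real implementation would obtain by sampling the map-side output) are hypothetical.

```scala
import org.apache.spark.Partitioner

// Sketch of a skew-aware partitioner: heavy sampled keys are greedily
// assigned to the partition with the smallest accumulated weight, so a few
// hot keys cannot pile up on one reducer. NOT the paper's algorithm;
// `keyWeights` is a hypothetical sampled estimate of per-key record counts.
class SkewAwarePartitioner(numParts: Int, keyWeights: Map[Any, Long])
    extends Partitioner {

  require(numParts > 0)

  // Greedy balancing: place the heaviest sampled keys first, each onto the
  // partition with the smallest load accumulated so far.
  private val assignment: Map[Any, Int] = {
    val loads = Array.fill(numParts)(0L)
    keyWeights.toSeq.sortBy(-_._2).map { case (key, weight) =>
      val target = loads.indexOf(loads.min)
      loads(target) += weight
      key -> target
    }.toMap
  }

  override def numPartitions: Int = numParts

  // Sampled (heavy) keys get their balanced slot; unseen light keys fall
  // back to plain non-negative hash partitioning.
  override def getPartition(key: Any): Int =
    assignment.getOrElse(key, ((key.hashCode % numParts) + numParts) % numParts)
}
```

A pair RDD could then be repartitioned before a shuffle-heavy stage with `rdd.partitionBy(new SkewAwarePartitioner(200, sampledWeights))`, where `sampledWeights` is the hypothetical sampled frequency map.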
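For the second idea, the following sketch shows greedy task placement on a heterogeneous cluster in the spirit the abstract describes (matching task requirements to node performance). The cost model here, a per-node speed factor with an earliest-finish-time rule, is an assumption for illustration, not the paper's exact formulation.

```scala
// Greedy scheduling sketch for a heterogeneous cluster. Each node has a
// speed factor (work units per second); each task has a workload. Tasks are
// placed largest-first on the node that would finish them earliest given the
// work already queued there. The cost model is an illustrative assumption.
object GreedyScheduleDemo extends App {
  case class Node(id: String, speed: Double) { var readyAt: Double = 0.0 }
  case class Task(id: String, work: Double)

  def greedySchedule(tasks: Seq[Task], nodes: Seq[Node]): Map[String, String] =
    tasks.sortBy(-_.work).map { task =>
      // Earliest estimated finish time = current queue length + run time.
      val best = nodes.minBy(n => n.readyAt + task.work / n.speed)
      best.readyAt += task.work / best.speed
      task.id -> best.id
    }.toMap

  // Two fast nodes and one slow node; the slow node only picks up the
  // lightest task, keeping the makespan balanced.
  val nodes = Seq(Node("n1", 2.0), Node("n2", 2.0), Node("n3", 1.0))
  val tasks = Seq(Task("t1", 8), Task("t2", 4), Task("t3", 4), Task("t4", 2))
  println(greedySchedule(tasks, nodes))
}
```

Largest-task-first ordering is the standard trick that makes greedy list scheduling balance well: placing big tasks early leaves the small ones to smooth out residual load differences between fast and slow nodes.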