Computer science
Reinforcement learning
Scheduling (production processes)
Directed acyclic graph
Distributed computing
Schedule (timetabling)
Fixed-priority preemptive scheduling
Parallel computing
Dynamic priority scheduling
Artificial intelligence
Rate-monotonic scheduling
Algorithm
Operating system
Mathematical optimization
Mathematics
Authors
Julius Roeder,Andy D. Pimentel,Clemens Grelck
Source
Journal: IFIP Advances in Information and Communication Technology
Date: 2023-01-01
Pages: 121-134
Citations: 1
Identifier
DOI: 10.1007/978-3-031-34107-6_10
Abstract
Applications in various fields such as embedded systems or High-Performance Computing are often represented as Directed Acyclic Graphs (DAGs), also known as task graphs. DAGs represent the data flow between tasks in an application and can be used for scheduling. When scheduling task graphs, a scheduler needs to decide when and on which core each task is executed, while minimising the runtime of the schedule. This paper explores offline scheduling of dependent tasks using a Reinforcement Learning (RL) approach. We propose two RL schedulers, one using a Fully Connected Network (FCN) and another using a Graph Convolutional Network (GCN). First, we detail the different components of our two RL schedulers and illustrate how they schedule a task. Then, we compare our RL schedulers to a Forward List Scheduling (FLS) approach on two different datasets. We demonstrate that our GCN-based scheduler produces schedules that are as good as or better than the schedules produced by the FLS approach in over 85% of the cases for a dataset with small task graphs. The same scheduler performs very similarly to the FLS scheduler (at most 5% degradation) in almost 76% of the cases for a more challenging dataset.
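To make the baseline concrete, below is a minimal sketch of a forward list scheduler of the kind the paper compares against: tasks are visited in a topological order and each is greedily placed on the core where it can start earliest, given its predecessors' finish times. The task names, durations, and the priority rule (plain topological order) are illustrative assumptions, not details taken from the paper.

```python
def fls_schedule(durations, deps, num_cores):
    """Greedy forward list scheduler for a task graph.

    durations: dict task -> execution time
    deps:      dict task -> list of predecessor tasks
    Returns (placement, finish, makespan).
    """
    # Topological order via Kahn's algorithm.
    indeg = {t: 0 for t in durations}
    succs = {t: [] for t in durations}
    for t, preds in deps.items():
        for p in preds:
            succs[p].append(t)
            indeg[t] += 1
    ready = [t for t in durations if indeg[t] == 0]
    order = []
    while ready:
        t = ready.pop(0)
        order.append(t)
        for s in succs[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)

    core_free = [0.0] * num_cores  # time at which each core becomes idle
    finish = {}                    # task -> finish time
    placement = {}                 # task -> chosen core index
    for t in order:
        # A task may start only after all its predecessors have finished.
        dep_ready = max((finish[p] for p in deps.get(t, [])), default=0.0)
        # Pick the core that allows the earliest start time.
        core = min(range(num_cores),
                   key=lambda c: max(core_free[c], dep_ready))
        start = max(core_free[core], dep_ready)
        finish[t] = start + durations[t]
        core_free[core] = finish[t]
        placement[t] = core
    makespan = max(finish.values())
    return placement, finish, makespan
```

For a diamond-shaped graph A -> {B, C} -> D with durations {A: 2, B: 3, C: 2, D: 1} on two cores, the sketch overlaps B and C on separate cores and yields a makespan of 6. An RL scheduler as described in the abstract would instead learn the placement decisions that this greedy rule makes by hand.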