Scalable Heterogeneous Scheduling Based Model Parallelism for Real-Time Inference of Large-Scale Deep Neural Networks

Keywords: Computer science, Scalability, Inference, Parallel computing, Scheduling (production processes), Artificial neural networks, Scale (ratio), Data parallelism, Parallelism (grammar), Artificial intelligence, Distributed computing, Databases, Mathematics, Mathematical optimization, Physics, Quantum mechanics
Authors
Xiaofeng Zou,Cen Chen,Pei-Yu Lin,L. L. Zhang,Yanwu Xu,Wenjie Zhang
Source
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence [Institute of Electrical and Electronics Engineers]
Volume/Issue: 8 (4): 2962-2973
Identifier
DOI: 10.1109/tetci.2024.3369628
Abstract

Scaling up the capacity of deep neural networks (DNNs) is an effective way to improve model quality across many DNN-based applications, so DNN models keep growing. To improve the execution efficiency of these large and complex models, devices are becoming increasingly heterogeneous, pairing CPUs with domain-specific hardware accelerators. In many cases, the capacity of a large-scale model exceeds the memory limit of a single accelerator. Recent work has shown that model parallelism, which partitions a DNN's computational graph across multiple devices, can not only address this problem but also provide significant performance improvements. In this work, we focus on optimizing model parallelism for timely inference of large-scale DNNs on heterogeneous processors. We transform the computation graphs of DNNs into directed acyclic graphs (DAGs) and propose to use heterogeneous scheduling methods to determine the model partition plan. However, we find that existing DAG scheduling methods leave considerable room for improvement on large-scale DAGs and have high computational complexity. To this end, we propose a scalable, DAG-partition-assisted scheduling method for heterogeneous processors that addresses these problems. Our approach takes the execution time of DNN models, scalability, and memory constraints into consideration. We demonstrate the effectiveness of our approach on both small- and large-scale DNN models. To the best of our knowledge, this is the first work to explore DAG scheduling and partitioning methods for model parallelism, and it provides new avenues for accelerating large-scale DNN inference.
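The abstract describes treating a DNN's computation graph as a DAG and scheduling it across heterogeneous devices under per-device memory limits. The sketch below is a minimal, hypothetical illustration of that problem setting only: a greedy earliest-finish-time list scheduler with memory constraints, not the paper's DAG-partition-assisted method. All names here (`Device`, `Op`, `schedule_dag`) are assumptions introduced for illustration.

```python
# Minimal sketch (not the paper's algorithm): greedily place each operator of a
# DNN DAG on the heterogeneous device that finishes it earliest, while
# respecting each device's memory limit.
from dataclasses import dataclass, field


@dataclass
class Device:
    name: str
    mem_limit: float          # available memory on the device
    speed: float              # relative throughput; higher = faster
    ready_time: float = 0.0   # time at which the device becomes free
    mem_used: float = 0.0


@dataclass
class Op:
    name: str
    cost: float               # nominal compute cost of the operator
    mem: float                # memory footprint (weights/activations)
    deps: list = field(default_factory=list)  # names of predecessor ops


def _topo_order(ops):
    """Return ops in a valid topological order (DFS over dependencies)."""
    by_name = {op.name: op for op in ops}
    seen, order = set(), []

    def visit(op):
        if op.name in seen:
            return
        seen.add(op.name)
        for d in op.deps:
            visit(by_name[d])
        order.append(op)

    for op in ops:
        visit(op)
    return order


def schedule_dag(ops, devices, comm_cost=0.1):
    """Assign each op to the device giving the earliest finish time."""
    finish = {}  # op name -> (finish time, device name)
    for op in _topo_order(ops):
        best = None
        for dev in devices:
            if dev.mem_used + op.mem > dev.mem_limit:
                continue  # respect the memory constraint
            # An op can start once its predecessors are done (plus a fixed
            # cross-device transfer penalty) and the device is free.
            dep_ready = max(
                (finish[d][0] + (comm_cost if finish[d][1] != dev.name else 0.0)
                 for d in op.deps),
                default=0.0,
            )
            start = max(dev.ready_time, dep_ready)
            end = start + op.cost / dev.speed
            if best is None or end < best[0]:
                best = (end, dev)
        if best is None:
            raise RuntimeError(f"no device can hold op {op.name}")
        end, dev = best
        dev.ready_time = end
        dev.mem_used += op.mem
        finish[op.name] = (end, dev.name)
    return finish


if __name__ == "__main__":
    devices = [Device("cpu", mem_limit=32, speed=1.0),
               Device("gpu0", mem_limit=16, speed=8.0)]
    ops = [Op("embed", cost=4, mem=6),
           Op("block1", cost=10, mem=5, deps=["embed"]),
           Op("block2", cost=10, mem=5, deps=["block1"]),
           Op("head", cost=2, mem=1, deps=["block2"])]
    for name, (t, dev) in schedule_dag(ops, devices).items():
        print(f"{name:7s} -> {dev:5s} finishes at {t:.2f}")
```

Per the abstract, the paper's contribution is a scalable partition-assisted scheduler that avoids the high complexity of per-operator list scheduling on large DAGs; the greedy sketch above only conveys the objective (earliest finish time under memory limits), not that optimization.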