
DeepSpeed Data Efficiency: Improving Deep Learning Model Quality and Training Efficiency via Efficient Data Sampling and Routing

Authors
Conglong Li, Zhewei Yao, Xiaoxia Wu, Minjia Zhang, Connor Holmes, Cheng Li, Yuxiong He
Source
Journal: Proceedings of the AAAI Conference on Artificial Intelligence (Association for the Advancement of Artificial Intelligence, AAAI)
Volume/Issue: 38 (16): 18490-18498 · Cited by: 5
Identifier
DOI: 10.1609/aaai.v38i16.29810
Abstract

Recent advances in deep learning models come at the price of formidable training cost. The increasing model size is one root cause, but another, less-emphasized fact is that data scale is growing at a similar speed as model scale, and training cost is proportional to both. Compared to the rapidly evolving model architectures, how to use the training data efficiently (especially for expensive foundation model pretraining) is both less explored and harder to realize, due to the lack of a convenient framework focused on data efficiency capabilities. To this end, we present DeepSpeed Data Efficiency, a framework that makes better use of data, increases training efficiency, and improves model quality. Specifically, we propose and combine two data efficiency techniques: efficient data sampling via a general curriculum learning library, and efficient data routing via a novel random layerwise token dropping technique. For GPT-3 1.3B language model pretraining, our work achieves 12.5x less data/time/cost ($3.7K if rented on Azure) while still maintaining 95% of the model quality of the baseline trained with full data and cost ($46.3K). For GPT-3 1.3B and BERT-large pretraining, our work can also achieve the same model quality with up to 2x less data/time/cost, or better model quality under the same data/time/cost. DeepSpeed Data Efficiency is easy to use and tune, enabling us to apply it and verify its benefits on additional tasks, including GPT-3 MoE model pretraining and small-scale GPT-2/ViT finetuning.
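The "random layerwise token dropping" idea in the abstract can be illustrated with a minimal sketch: at selected middle layers, each layer processes only a random subset of token positions, while the dropped tokens bypass that layer unchanged, cutting per-layer compute roughly in proportion to the drop ratio. Everything below is an illustrative assumption, not the DeepSpeed implementation: the function name `random_layerwise_token_drop` and the `* 2.0` stand-in for a transformer layer are invented for this sketch.

```python
import numpy as np

def random_layerwise_token_drop(hidden, keep_ratio, rng):
    """Sketch of one layer's random token dropping.

    `hidden` is (seq_len, dim). A random subset of token positions is
    kept and "processed" (here just scaled by 2.0 as a stand-in for a
    transformer layer); dropped tokens skip the layer unchanged, so the
    output sequence length matches the input.
    """
    seq_len, _ = hidden.shape
    n_keep = max(1, int(seq_len * keep_ratio))
    # Each layer independently draws its own random subset of tokens.
    keep_idx = rng.choice(seq_len, size=n_keep, replace=False)
    kept = hidden[keep_idx]      # only these tokens go through the layer
    processed = kept * 2.0       # stand-in for the real layer computation
    out = hidden.copy()          # dropped tokens bypass the layer as-is
    out[keep_idx] = processed
    return out, n_keep

# Example: with keep_ratio=0.5, each layer computes on only half the
# tokens, so attention/FFN cost at that layer drops accordingly.
rng = np.random.default_rng(0)
hidden = np.ones((8, 4))
out, n_kept = random_layerwise_token_drop(hidden, 0.5, rng)
```

Because each layer draws an independent random subset, every token is still updated by most layers over the course of the network, which is what distinguishes the layerwise variant from dropping tokens once for the whole forward pass.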