
Raptor-T: A Fused and Memory-Efficient Sparse Transformer for Long and Variable-Length Sequences

Authors
H. Wang, Donglin Yang, Yaqi Xia, Zheng Zhang, Qigang Wang, Jianping Fan, Xiaobo Zhou, Dazhao Cheng
Source
Journal: IEEE Transactions on Computers [Institute of Electrical and Electronics Engineers]
Volume/Issue: 73 (7): 1852-1865
Identifier
DOI: 10.1109/TC.2024.3389507
Abstract

Transformer-based models have made significant advancements across various domains, largely due to the self-attention mechanism's ability to capture contextual relationships in input sequences. However, processing long sequences remains computationally expensive for Transformer models, primarily due to the O(n²) complexity of self-attention. To address this, sparse attention has been proposed to reduce the quadratic dependency to linear. Nevertheless, deploying the sparse transformer efficiently encounters two major obstacles: 1) existing system optimizations are less effective for the sparse transformer because the algorithm's approximation properties lead to fragmented attention, and 2) the variability of input sequence lengths causes computation and memory-access inefficiencies. We present Raptor-T, a cutting-edge transformer framework designed for handling long and variable-length sequences. Raptor-T harnesses the power of the sparse transformer to reduce resource requirements for processing long sequences while also applying system-level optimizations to accelerate inference. To address the fragmented-attention issue, Raptor-T employs fused and memory-efficient Multi-Head Attention. Additionally, we introduce an asynchronous data-processing method to mitigate GPU-blocking operations caused by sparse attention. Furthermore, Raptor-T minimizes padding for variable-length inputs, effectively reducing padding overhead and achieving balanced computation on GPUs. In evaluation, we compare Raptor-T's performance against state-of-the-art frameworks on an NVIDIA A100 GPU. The experimental results demonstrate that Raptor-T outperforms FlashAttention-2 and FasterTransformer, achieving average end-to-end performance improvements of 3.41X and 3.71X, respectively.
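The abstract mentions two ideas that are easy to illustrate in isolation: packing variable-length sequences without padding, and a block-sparse attention pattern whose cost grows roughly linearly with sequence length. The sketch below is a minimal, illustrative PyTorch example of those two ideas only; it is not Raptor-T's implementation or API. The helper names (pack_sequences, block_sparse_mask), the block/window/global-block parameters, and the cu_seqlens bookkeeping convention are assumptions made for illustration.

import torch

def pack_sequences(seqs):
    """Concatenate variable-length sequences into one padding-free buffer.
    Returns the packed tensor and cumulative sequence lengths (cu_seqlens),
    the bookkeeping that varlen attention kernels typically use to keep
    batch items separate without padding. Illustrative only."""
    packed = torch.cat(seqs, dim=0)                      # (total_tokens, hidden)
    lens = torch.tensor([s.shape[0] for s in seqs])
    cu_seqlens = torch.cat([torch.zeros(1, dtype=torch.long), lens.cumsum(0)])
    return packed, cu_seqlens

def block_sparse_mask(n_tokens, block=64, window=1, n_global=1):
    """Boolean (n_blocks, n_blocks) mask: each block attends to neighbours
    within `window` blocks plus a few global blocks, so the number of active
    blocks grows linearly with sequence length rather than quadratically."""
    n_blocks = (n_tokens + block - 1) // block
    i = torch.arange(n_blocks)
    mask = (i[:, None] - i[None, :]).abs() <= window     # sliding-window blocks
    mask[:, :n_global] = True                             # global columns
    mask[:n_global, :] = True                             # global rows
    return mask

if __name__ == "__main__":
    hidden = 8
    seqs = [torch.randn(n, hidden) for n in (5, 13, 9)]   # variable lengths
    packed, cu_seqlens = pack_sequences(seqs)
    print(packed.shape, cu_seqlens.tolist())              # torch.Size([27, 8]) [0, 5, 18, 27]

    mask = block_sparse_mask(n_tokens=4096, block=64, window=1, n_global=1)
    print(f"active attention blocks: {mask.float().mean().item():.1%} of a dense mask")

Under these assumptions, packing removes the wasted computation that padded batches spend on filler tokens, and the sliding-window-plus-global pattern keeps only O(n) of the O(n²) attention blocks active, which is the kind of sparsity that makes fused, memory-efficient kernels and balanced GPU scheduling worthwhile.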