Keywords
Acceleration, Utilization, Computer science, Bottleneck, Computation, Transformer, Software deployment, Computer engineering, Quadratic equation, Suite, Parallel computing, Algorithm, Embedded system, Software engineering, Voltage, Engineering, Computer security, History, Geometry, Electrical engineering, Mathematics, Archaeology
Authors
Liu Liu, Zheng Qu, Zhaodong Chen, Yufei Ding, Yuan Xie
Source
Journal: Cornell University - arXiv
Date: 2021-01-01
Cited by: 10
Identifier
DOI: 10.48550/arxiv.2110.11299
Abstract
Transformers are the mainstream architecture for NLP applications and are becoming increasingly popular in other domains such as computer vision. Despite improvements in model quality, their enormous computation costs make Transformers difficult to deploy, especially when sequence lengths are large in emerging applications. The attention mechanism, the essential component of the Transformer, is the execution bottleneck due to its quadratic complexity in sequence length. Prior work explores sparse patterns in attention to support long-sequence modeling, but relies on static or fixed patterns. We demonstrate that the sparse patterns are dynamic, depending on the input sequence. We therefore propose Dynamic Sparse Attention (DSA), which can efficiently exploit the dynamic sparsity in Transformer attention. Compared with other methods, our approach achieves better trade-offs between accuracy and model complexity. Moving forward, we identify challenges and provide solutions to implement DSA on existing hardware (GPUs) and specialized hardware in order to achieve practical speedup and efficiency improvements for Transformer execution.
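To make the idea of input-dependent (dynamic) sparsity concrete, the sketch below is a minimal single-head NumPy example, not the authors' DSA implementation: each query keeps only its highest-scoring keys, so the sparsity pattern changes with every input sequence. It still materializes the full quadratic score matrix, so it illustrates only the masking idea; the paper's contribution is to obtain such masks cheaply and support them on GPUs and specialized hardware so the full computation is avoided. The function name, shapes, and the top-k thresholding rule are assumptions made for illustration.

import numpy as np

def dynamic_sparse_attention(Q, K, V, keep_ratio=0.25):
    """Illustrative sketch only (not the paper's DSA): single-head attention
    where each query attends to only its top keep_ratio fraction of keys,
    giving an input-dependent sparsity pattern.
    Q, K, V: arrays of shape (seq_len, d_model)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # full scores, for illustration only
    k = max(1, int(keep_ratio * scores.shape[-1]))  # number of keys kept per query
    thresh = np.partition(scores, -k, axis=-1)[:, -k][:, None]
    mask = scores >= thresh                         # dynamic mask: depends on the input
    masked = np.where(mask, scores, -np.inf)        # drop pruned positions before softmax
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Usage with a random toy sequence (hypothetical sizes)
rng = np.random.default_rng(0)
L, d = 16, 8
Q, K, V = rng.normal(size=(3, L, d))
out = dynamic_sparse_attention(Q, K, V, keep_ratio=0.25)
print(out.shape)  # (16, 8)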