Initialization
Quadratic growth
Computer science
Byte
Transformer
Sparse matrix
Sequence (biology)
Architecture
Parallel computing
Algorithm
Theoretical computer science
Artificial intelligence
Computer hardware
Programming language
Electrical engineering
Engineering
Biology
Physics
Art
Visual arts
Gaussian distribution
Voltage
Quantum mechanics
Genetics
Authors
Rewon Child, Scott Gray, Alec Radford, Ilya Sutskever
Source
Journal: Cornell University - arXiv
Date: 2019-01-01
Citations: 591
Identifier
DOI: 10.48550/arxiv.1904.10509
Abstract
Transformers are powerful sequence models, but require time and memory that grows quadratically with the sequence length. In this paper we introduce sparse factorizations of the attention matrix which reduce this to $O(n \sqrt{n})$. We also introduce a) a variation on architecture and initialization to train deeper networks, b) the recomputation of attention matrices to save memory, and c) fast attention kernels for training. We call networks with these changes Sparse Transformers, and show they can model sequences tens of thousands of timesteps long using hundreds of layers. We use the same architecture to model images, audio, and text from raw bytes, setting a new state of the art for density modeling of Enwik8, CIFAR-10, and ImageNet-64. We generate unconditional samples that demonstrate global coherence and great diversity, and show it is possible in principle to use self-attention to model sequences of length one million or more.
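The O(n√n) factorization in the abstract can be illustrated with the "strided" sparsity pattern the paper describes: each position attends to the previous stride positions (local head) and to every stride-th earlier position (strided head), with stride chosen near √n. The sketch below is a minimal numpy illustration under those assumptions, not the authors' released fused kernels; the function names strided_sparse_mask and masked_attention are hypothetical, and the dense masked softmax is only a reference computation (a real sparse kernel would never materialize the full n x n score matrix).

import numpy as np

def strided_sparse_mask(n: int, stride: int) -> np.ndarray:
    """Boolean causal mask for a strided sparse attention pattern.

    Position i may attend to position j <= i if either
      - j lies within the previous `stride` positions (local head), or
      - (i - j) is a multiple of `stride` (strided head).
    With stride ~ sqrt(n), each row keeps O(sqrt(n)) entries,
    so the whole pattern has O(n * sqrt(n)) nonzeros instead of O(n^2).
    """
    i = np.arange(n)[:, None]   # query positions
    j = np.arange(n)[None, :]   # key positions
    causal = j <= i
    local = (i - j) < stride
    strided = (i - j) % stride == 0
    return causal & (local | strided)

def masked_attention(q, k, v, mask):
    """Dense reference attention with the sparse mask applied.

    Disallowed entries are set to -inf before the softmax; a real
    Sparse Transformer kernel computes only the allowed entries.
    """
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

if __name__ == "__main__":
    n, d = 64, 16
    stride = int(np.sqrt(n))  # stride chosen near sqrt(n)
    rng = np.random.default_rng(0)
    q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
    mask = strided_sparse_mask(n, stride)
    out = masked_attention(q, k, v, mask)
    print(mask.sum(), "of", n * n, "attention entries kept")
    print(out.shape)

Running the example keeps roughly 2·n·√n of the n² attention entries, which is the source of the O(n√n) time and memory claim; the paper additionally recomputes attention matrices during the backward pass to reduce memory further.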