Computer Science
Transformer
Architecture
Machine Translation
Key (lock)
Artificial Intelligence
Layer (electronics)
Machine Learning
Human-Computer Interaction
Natural Language Processing
Art
Physics
Voltage
Visual Arts
Chemistry
Organic Chemistry
Computer Security
Quantum Mechanics
Authors
Jean-Baptiste Cordonnier, Andreas Loukas, Martin Jaggi
Source
Journal: Cornell University - arXiv
Date: 2020-01-01
Citations: 4
Identifier
DOI: 10.48550/arxiv.2006.16362
Abstract
Attention layers are widely used in natural language processing (NLP) and are beginning to influence computer vision architectures. Training very large transformer models has allowed significant improvements in both fields, but once trained, these networks show symptoms of over-parameterization. For instance, it is known that many attention heads can be pruned without impacting accuracy. This work aims to enhance current understanding of how multiple heads interact. Motivated by the observation that attention heads learn redundant key/query projections, we propose a collaborative multi-head attention layer that enables heads to learn shared projections. Our scheme decreases the number of parameters in an attention layer and can be used as a drop-in replacement in any transformer architecture. Our experiments confirm that sharing key/query dimensions can be exploited in language understanding, machine translation and vision. We also show that it is possible to re-parametrize a pre-trained multi-head attention layer into our collaborative attention layer. Collaborative multi-head attention reduces the size of the key and query projections by a factor of 4 at the same accuracy and speed. Our code is public.
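The abstract describes letting all heads share key/query projections instead of learning separate per-head projections. Below is a minimal sketch, not the authors' released implementation, of how such a collaborative layer could look in PyTorch: a single shared query/key projection of size `shared_dim`, with a per-head mixing vector selecting how each head re-uses the shared dimensions. The class name `CollaborativeSelfAttention`, the `shared_dim` argument, and the `mixing` parameter are illustrative assumptions, not taken from the paper's code.

```python
# Illustrative sketch of collaborative multi-head attention (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CollaborativeSelfAttention(nn.Module):
    def __init__(self, embed_dim: int, num_heads: int, shared_dim: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = embed_dim // num_heads
        # One shared key/query projection for all heads, instead of
        # num_heads separate per-head projection matrices.
        self.q_proj = nn.Linear(embed_dim, shared_dim, bias=False)
        self.k_proj = nn.Linear(embed_dim, shared_dim, bias=False)
        # Per-head mixing vectors: each head re-weights the shared dimensions.
        self.mixing = nn.Parameter(torch.ones(num_heads, shared_dim))
        # Value and output projections stay as in standard multi-head attention.
        self.v_proj = nn.Linear(embed_dim, embed_dim, bias=False)
        self.out_proj = nn.Linear(embed_dim, embed_dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, embed_dim)
        B, T, _ = x.shape
        q = self.q_proj(x)                                   # (B, T, shared_dim)
        k = self.k_proj(x)                                   # (B, T, shared_dim)
        v = self.v_proj(x).view(B, T, self.num_heads, self.head_dim)

        # Head-specific queries via the mixing vectors; keys stay shared.
        q_h = q.unsqueeze(1) * self.mixing.view(1, self.num_heads, 1, -1)  # (B, H, T, shared_dim)
        k_h = k.unsqueeze(1)                                               # (B, 1, T, shared_dim)

        scores = q_h @ k_h.transpose(-2, -1) / (q.shape[-1] ** 0.5)        # (B, H, T, T)
        attn = F.softmax(scores, dim=-1)
        out = attn @ v.transpose(1, 2)                                     # (B, H, T, head_dim)
        return self.out_proj(out.transpose(1, 2).reshape(B, T, -1))


# Example: 8 heads sharing a key/query space 4x smaller than embed_dim.
layer = CollaborativeSelfAttention(embed_dim=512, num_heads=8, shared_dim=128)
y = layer(torch.randn(2, 16, 512))
print(y.shape)  # torch.Size([2, 16, 512])
```

With `shared_dim` set to a quarter of `embed_dim`, the key/query projection parameters shrink by roughly the factor of 4 mentioned in the abstract, while the layer keeps the same input/output interface as a standard multi-head attention block.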