Embedding
Computer science
Transformer
RoPE (rotary position embedding)
Token
Artificial intelligence
Algorithm
Engineering
Authors
Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Bo Wen, Yunfeng Liu
Source
Journal: Neurocomputing (Elsevier)
Date: 2024-02-01
Volume: 568, Article: 127063
Citations: 56
Identifier
DOI: 10.1016/j.neucom.2023.127063
Abstract
Position encoding has recently been shown to be effective in the transformer architecture. It provides valuable supervision for dependency modeling between elements at different positions of a sequence. In this paper, we first investigate various methods of integrating positional information into the learning process of transformer-based language models. We then propose a novel method, named Rotary Position Embedding (RoPE), to effectively leverage positional information. Specifically, the proposed RoPE encodes the absolute position with a rotation matrix while incorporating the explicit relative position dependency in the self-attention formulation. Notably, RoPE has several valuable properties, including flexibility with respect to sequence length, decaying inter-token dependency with increasing relative distance, and the capability of equipping linear self-attention with relative position encoding. Finally, we evaluate the enhanced transformer with rotary position embedding, called RoFormer, on various long-text classification benchmark datasets. Our experiments show that it consistently outperforms its alternatives. Furthermore, we provide a theoretical analysis to explain some of the experimental results. RoFormer is already integrated into Huggingface: https://huggingface.co/docs/transformers/model_doc/roformer.
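The abstract's core mechanism lends itself to a short sketch: each pair of feature channels is treated as 2-D coordinates and rotated by an angle proportional to the token's absolute position, so the query-key dot product ends up depending only on the relative offset between positions. Below is a minimal NumPy sketch of this idea, not the authors' reference implementation; the function name rotary_embedding is illustrative, while the base of 10000 follows the paper's choice of frequencies theta_i = 10000^(-2i/d).

```python
import numpy as np

def rotary_embedding(x, base=10000.0):
    """Rotate each channel pair (2i, 2i+1) of x (shape: seq_len x dim)
    by the angle m * theta_i, where m is the token position and
    theta_i = base**(-2i/dim)."""
    seq_len, dim = x.shape
    assert dim % 2 == 0, "feature dimension must be even"

    theta = base ** (-np.arange(0, dim, 2) / dim)   # (dim/2,) frequencies
    angles = np.outer(np.arange(seq_len), theta)    # (seq_len, dim/2) angles
    cos, sin = np.cos(angles), np.sin(angles)

    x_even, x_odd = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x_even * cos - x_odd * sin       # 2-D rotation per pair
    out[:, 1::2] = x_even * sin + x_odd * cos
    return out

# Relative-position property: the same query/key vectors placed at
# different positions give equal attention scores whenever the
# positional offset is equal.
rng = np.random.default_rng(0)
q = rotary_embedding(np.tile(rng.standard_normal(64), (10, 1)))
k = rotary_embedding(np.tile(rng.standard_normal(64), (10, 1)))
print(np.allclose(q[2] @ k[5], q[4] @ k[7]))  # True: both offsets are 3
```

In a transformer this rotation would be applied to the query and key vectors before the attention dot product; because rotation matrices are orthogonal, the score between positions m and n depends on them only through the offset m - n, which is also what makes the scheme compatible with linear self-attention.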