Concatenation (mathematics)
Margin (machine learning)
Computer science
Natural language understanding
Word (group theory)
Language model
Natural language processing
Natural language
Artificial intelligence
Machine learning
Arithmetic
Mathematics
Geometry
Authors
Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, Jie Tang
Source
Journal: Cornell University - arXiv
Date: 2021-01-01
Citations: 143
Identifier
DOI: 10.48550/arxiv.2103.10385
Abstract
Prompting a pretrained language model with natural language patterns has proven effective for natural language understanding (NLU). However, our preliminary study reveals that manual discrete prompts often lead to unstable performance: changing a single word in the prompt, for example, can cause a substantial drop in performance. We propose P-Tuning, a novel method that employs trainable continuous prompt embeddings concatenated with discrete prompts. Empirically, P-Tuning not only stabilizes training by minimizing the gap between various discrete prompts, but also improves performance by a sizeable margin on a wide range of NLU tasks, including LAMA and SuperGLUE. P-Tuning is generally effective for both frozen and tuned language models, in both fully-supervised and few-shot settings.
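The abstract's core mechanism, trainable continuous prompt embeddings concatenated with the embeddings of discrete prompt tokens, can be illustrated with a minimal PyTorch-style sketch. This is a hedged illustration rather than the authors' released implementation: the class and parameter names (ContinuousPrompt, num_prompt_tokens) are hypothetical, and the paper's prompt encoder (a small LSTM/MLP applied over the prompt embeddings) is omitted for brevity.

```python
# Minimal sketch of the P-Tuning idea, assuming a PyTorch-style pipeline:
# trainable continuous prompt vectors are concatenated with the embeddings
# of the discrete prompt/input tokens before entering the language model.
# Names below (ContinuousPrompt, num_prompt_tokens) are illustrative only.
import torch
import torch.nn as nn

class ContinuousPrompt(nn.Module):
    def __init__(self, num_prompt_tokens: int, hidden_size: int):
        super().__init__()
        # Trainable prompt embeddings; these are the only parameters that
        # need gradient updates when the language model itself is frozen.
        self.prompt_embeddings = nn.Embedding(num_prompt_tokens, hidden_size)

    def forward(self, discrete_embeds: torch.Tensor) -> torch.Tensor:
        # discrete_embeds: (batch, seq_len, hidden) -- embeddings of the
        # hand-written discrete prompt plus the task input.
        batch_size = discrete_embeds.size(0)
        prompt = self.prompt_embeddings.weight.unsqueeze(0).expand(
            batch_size, -1, -1
        )
        # Prepend the continuous prompt; the combined sequence is fed to
        # the (frozen or tuned) language model as input embeddings.
        return torch.cat([prompt, discrete_embeds], dim=1)

# Example: 8 continuous prompt tokens in front of a 16-token sequence.
prompt = ContinuousPrompt(num_prompt_tokens=8, hidden_size=768)
token_embeds = torch.randn(2, 16, 768)   # stand-in for real embeddings
inputs_embeds = prompt(token_embeds)     # shape: (2, 24, 768)
```

In this sketch the combined sequence grows from 16 to 24 positions, and when the language model is frozen only prompt_embeddings receives gradients, which matches the abstract's claim that the method works for both frozen and tuned models.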