Large Language Models are Zero-Shot Reasoners

Topics: Shot, Task (project management), Benchmark (surveying), Computer science, Zero (linguistics), Artificial intelligence, Natural language processing, Cognitive psychology, Linguistics, Psychology, Engineering, Geography, Chemistry, Philosophy, Organic chemistry, Geodesy, Systems engineering
Authors
Takeshi Kojima, Shixiang Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
Source
Venue: arXiv (Cornell University) · Cited by: 804
Identifier
DOI: 10.48550/arxiv.2205.11916
Abstract

Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and are generally known as excellent few-shot learners when given task-specific exemplars. Notably, chain-of-thought (CoT) prompting, a recent technique for eliciting complex multi-step reasoning through step-by-step answer examples, has achieved state-of-the-art performance in arithmetic and symbolic reasoning, difficult system-2 tasks that do not follow the standard scaling laws for LLMs. While these successes are often attributed to LLMs' ability to learn from a few examples, we show that LLMs are decent zero-shot reasoners when "Let's think step by step" is simply added before each answer. Experimental results demonstrate that our Zero-shot-CoT, using the same single prompt template, significantly outperforms zero-shot LLM performance on diverse benchmark reasoning tasks, including arithmetic (MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin Flip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled Objects), without any hand-crafted few-shot examples: for instance, it increases accuracy on MultiArith from 17.7% to 78.7% and on GSM8K from 10.4% to 40.7% with the large InstructGPT model (text-davinci-002), with improvements of similar magnitude for another off-the-shelf large model, the 540B-parameter PaLM. The versatility of this single prompt across very diverse reasoning tasks hints at untapped and understudied fundamental zero-shot capabilities of LLMs, suggesting that high-level, multi-task, broad cognitive capabilities may be extracted by simple prompting. We hope our work serves not only as the minimal strongest zero-shot baseline for these challenging reasoning benchmarks, but also highlights the importance of carefully exploring and analyzing the enormous zero-shot knowledge hidden inside LLMs before crafting finetuning datasets or few-shot exemplars.
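The method the abstract describes reduces to a fixed prompt template. In the paper, Zero-shot-CoT is a two-stage prompting pipeline: the fixed trigger "Let's think step by step" first elicits a reasoning chain, and a second answer-extraction prompt then pulls a short, parseable answer out of that chain. The sketch below illustrates the template under one assumption: the `complete` callable is a hypothetical stand-in for any prompt-to-completion LLM client (e.g., one querying text-davinci-002), not code from the paper.

```python
from typing import Callable

def zero_shot_cot(question: str, complete: Callable[[str], str]) -> str:
    """Two-stage Zero-shot-CoT sketch. `complete` maps a prompt string to
    the model's completion; plug in any real LLM client here."""
    # Stage 1 (reasoning extraction): a single fixed trigger phrase
    # elicits a step-by-step reasoning chain, with no few-shot exemplars.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    reasoning = complete(reasoning_prompt)

    # Stage 2 (answer extraction): feed the chain back with a cue that
    # yields a short, parseable final answer. The cue is task-dependent;
    # "(arabic numerals)" suits the arithmetic benchmarks.
    answer_prompt = (
        f"{reasoning_prompt} {reasoning}\n"
        "Therefore, the answer (arabic numerals) is"
    )
    return complete(answer_prompt).strip()

# Hypothetical usage: zero_shot_cot("How many wheels do 3 cars have?", llm)
```

The answer-extraction cue varies by task in the paper; for multiple-choice benchmarks such as AQUA-RAT, a cue along the lines of "among A through E, the answer is" replaces the numeric one.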