Computer science
Language model
Benchmark (surveying)
Chain (unit)
Cognitive science
Artificial intelligence
Word (group theory)
Natural language processing
Commonsense reasoning
Range (aeronautics)
Simplicity (philosophy)
Theoretical computer science
Psychology
Linguistics
Epistemology
Philosophy
Physics
Materials science
Geodesy
Astronomy
Composite material
Geography
Authors
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed H., Quoc V. Le, Denny Zhou
Source
Journal: Cornell University - arXiv
Date: 2022-01-01
Citations: 1463
Identifier
DOI:10.48550/arxiv.2201.11903
Abstract
We explore how generating a chain of thought -- a series of intermediate reasoning steps -- significantly improves the ability of large language models to perform complex reasoning. In particular, we show how such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain of thought prompting, where a few chain of thought demonstrations are provided as exemplars in prompting. Experiments on three large language models show that chain of thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks. The empirical gains can be striking. For instance, prompting a 540B-parameter language model with just eight chain of thought exemplars achieves state of the art accuracy on the GSM8K benchmark of math word problems, surpassing even finetuned GPT-3 with a verifier.
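The prompting method the abstract describes can be sketched in a few lines: each few-shot exemplar pairs a question with intermediate reasoning steps before the final answer, and the model is asked to continue the pattern on a new question. This is a minimal illustrative sketch, not the paper's exact prompts; the exemplar text and helper names below are assumptions.

```python
# Minimal sketch of chain-of-thought prompting: few-shot exemplars that
# include intermediate reasoning ("chain of thought") before the answer.
# Exemplar wording and function names are illustrative assumptions.

EXEMPLARS = [
    {
        "question": (
            "Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
            "Each can has 3 tennis balls. How many tennis balls does he have now?"
        ),
        "chain_of_thought": (
            "Roger started with 5 balls. 2 cans of 3 tennis balls each is "
            "6 tennis balls. 5 + 6 = 11."
        ),
        "answer": "11",
    },
]


def build_cot_prompt(exemplars, new_question):
    """Concatenate few-shot exemplars (question + reasoning + answer),
    then append the new question so the model continues with its own
    chain of thought before stating a final answer."""
    parts = []
    for ex in exemplars:
        parts.append(
            f"Q: {ex['question']}\n"
            f"A: {ex['chain_of_thought']} The answer is {ex['answer']}."
        )
    parts.append(f"Q: {new_question}\nA:")
    return "\n\n".join(parts)


prompt = build_cot_prompt(
    EXEMPLARS,
    "A juggler has 16 balls. Half of the balls are golf balls. "
    "How many golf balls are there?",
)
print(prompt)
```

The resulting string would be sent as-is to a large language model; the paper's finding is that including the reasoning steps in the exemplars, rather than only question-answer pairs, is what elicits step-by-step reasoning at sufficient model scale.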