Backtracking
Computer science
Security token
Inference
Tree (set theory)
Artificial intelligence
Code (set theory)
Range (aeronautics)
Action (physics)
Language model
Cognitive science
Theoretical computer science
Psychology
Programming language
Computer security
Mathematics
Mathematical analysis
Materials science
Physics
Set (abstract data type)
Quantum mechanics
Composite material
Authors
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan
Source
Venue: arXiv (Cornell University)
Date: 2023-01-01
Citations: 191
Identifier
DOI: 10.48550/arxiv.2305.10601
Abstract
Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role. To surmount these challenges, we introduce a new framework for language model inference, Tree of Thoughts (ToT), which generalizes over the popular Chain of Thought approach to prompting language models, and enables exploration over coherent units of text (thoughts) that serve as intermediate steps toward problem solving. ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices. Our experiments show that ToT significantly enhances language models' problem-solving abilities on three novel tasks requiring non-trivial planning or search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of tasks, our method achieved a success rate of 74%. Code repo with all prompts: https://github.com/princeton-nlp/tree-of-thought-llm.
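The abstract describes ToT as a search over intermediate "thoughts" in which the model proposes candidate next steps, self-evaluates them, and prunes or backtracks to make global choices. The following is a minimal, hypothetical sketch of one such breadth-first search loop, assuming only a generic llm(prompt) -> str callable; the propose_thoughts and score_state helpers and their prompts are illustrative placeholders, not the prompts or API from the linked repository.

```python
"""Minimal sketch of a breadth-first Tree-of-Thoughts search (illustrative only)."""

from typing import Callable, List, Tuple


def propose_thoughts(llm: Callable[[str], str], state: str, k: int) -> List[str]:
    """Ask the LM for k candidate next 'thoughts' extending the partial solution."""
    prompt = f"Problem state so far:\n{state}\nPropose one possible next step:"
    return [llm(prompt) for _ in range(k)]


def score_state(llm: Callable[[str], str], state: str) -> float:
    """Ask the LM to self-evaluate a partial solution; map its verdict to a number."""
    prompt = (f"Evaluate whether this partial solution can still reach the goal:\n"
              f"{state}\nAnswer sure/maybe/impossible:")
    verdict = llm(prompt).strip().lower()
    return {"sure": 1.0, "maybe": 0.5}.get(verdict, 0.0)


def tot_bfs(llm: Callable[[str], str], problem: str,
            steps: int = 3, k: int = 4, beam: int = 2) -> str:
    """Expand each frontier state, self-evaluate candidates, keep the best `beam` states."""
    frontier: List[str] = [problem]
    for _ in range(steps):
        candidates: List[Tuple[float, str]] = []
        for state in frontier:
            for thought in propose_thoughts(llm, state, k):
                new_state = state + "\n" + thought
                candidates.append((score_state(llm, new_state), new_state))
        # Pruning low-value branches each step is what gives the search its
        # implicit lookahead/backtracking behavior.
        candidates.sort(key=lambda c: c[0], reverse=True)
        frontier = [state for _, state in candidates[:beam]]
    return frontier[0]


if __name__ == "__main__":
    # Stub LM so the sketch runs offline; a real run would call an actual model.
    def fake_llm(prompt: str) -> str:
        return "maybe" if "sure/maybe/impossible" in prompt else "try combining two numbers"

    print(tot_bfs(fake_llm, "Game of 24 with inputs: 4 9 10 13"))
```

The official repository uses the same propose-and-evaluate structure with task-specific prompts (e.g., for Game of 24), so this sketch should be read as a schematic of the search loop rather than a reproduction of the authors' implementation.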