Chaining
Transparency (behavior)
Computer science
Controllability
Modular design
Scope (computer science)
Set (abstract data type)
Forward chaining
Human-computer interaction
Distributed computing
Artificial intelligence
Programming language
Computer security
Expert system
Psychology
Psychotherapist
Mathematics
Applied mathematics
Authors
Tongshuang Wu, Michael Terry, Carrie J. Cai
Identifier
DOI: 10.1145/3491102.3517582
Abstract
Although large language models (LLMs) have demonstrated impressive potential on simple tasks, their breadth of scope, lack of transparency, and insufficient controllability can make them less effective when assisting humans on more complex tasks. In response, we introduce the concept of Chaining LLM steps together, where the output of one step becomes the input for the next, thus aggregating the gains per step. We first define a set of LLM primitive operations useful for Chain construction, then present an interactive system where users can modify these Chains, along with their intermediate results, in a modular way. In a 20-person user study, we found that Chaining not only improved the quality of task outcomes, but also significantly enhanced system transparency, controllability, and sense of collaboration. Additionally, we saw that users developed new ways of interacting with LLMs through Chains: they leveraged sub-tasks to calibrate model expectations, compared and contrasted alternative strategies by observing parallel downstream effects, and debugged unexpected model outputs by "unit-testing" sub-components of a Chain. In two case studies, we further explore how LLM Chains may be used in future applications.
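As a rough illustration of the Chaining idea described in the abstract, the sketch below feeds the output of one LLM step into the input of the next while keeping intermediate results available for inspection. The `call_llm` function and the example prompts are hypothetical placeholders, not the primitive operations or interactive system presented in the paper.

```python
# Minimal sketch of Chaining LLM steps: each step's output becomes the next step's input.
# `call_llm` is a hypothetical stand-in for any text-completion API call;
# the prompts below are illustrative only.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with a real LLM API call.")

def run_chain(task_input: str, step_prompts: list[str]) -> list[str]:
    """Run each step on the previous step's output and keep every intermediate result."""
    intermediate = []          # exposing these supports inspecting and editing a Chain
    current = task_input
    for prompt_template in step_prompts:
        current = call_llm(prompt_template.format(input=current))
        intermediate.append(current)
    return intermediate

# Example: decompose a task into sub-steps, each handled by its own prompt.
steps = [
    "List the main problems mentioned in this feedback:\n{input}",
    "For each problem listed below, suggest one concrete fix:\n{input}",
    "Rewrite the suggestions below as a short, friendly reply:\n{input}",
]
# results = run_chain(customer_feedback_text, steps)
```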