Transformative learning
Perspective (graphical)
Field (mathematics)
Psychology
Standardization
Data science
Computer science
Developmental psychology
Artificial intelligence
Mathematics
Pure mathematics
Operating system
Authors
Dorottya Demszky, Diyi Yang, David S. Yeager, Christopher J. Bryan, Margarett Clapper, Susannah Chandhok, Johannes C. Eichstaedt, Cameron A. Hecht, Jeremy P. Jamieson, Meghann Johnson, Michaela Jones, Danielle Krettek-Cobb, Leslie C. Lai, Nirel JonesMitchell, Desmond C. Ong, Carol S. Dweck, James J. Gross, James W. Pennebaker
Identifier
DOI: 10.1038/s44159-023-00241-5
Abstract
Large language models (LLMs), such as OpenAI's GPT-4, Google's Bard or Meta's LLaMa, have created unprecedented opportunities for analysing and generating language data on a massive scale. Because language data have a central role in all areas of psychology, this new technology has the potential to transform the field. In this Perspective, we review the foundations of LLMs. We then explain how the way that LLMs are constructed enables them to effectively generate human-like linguistic output without the ability to think or feel like a human. We argue that although LLMs have the potential to advance psychological measurement, experimentation and practice, they are not yet ready for many of the most transformative psychological applications — but further research and development may enable such use. Next, we examine four major concerns about the application of LLMs to psychology, and how each might be overcome. Finally, we conclude with recommendations for investments that could help to address these concerns: field-initiated 'keystone' datasets; increased standardization of performance benchmarks; and shared computing and analysis infrastructure to ensure that the future of LLM-powered research is equitable.

Large language models (LLMs), which can generate and score text in human-like ways, have the potential to advance psychological measurement, experimentation and practice. In this Perspective, Demszky and colleagues describe how LLMs work, concerns about using them for psychological purposes, and how these concerns might be addressed.
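As a concrete illustration of the kind of text scoring the abstract refers to, below is a minimal sketch of prompting an LLM to rate a passage on a psychological construct. It assumes the OpenAI Python SDK and its chat completions API; the model name, rubric, and construct (expressed optimism) are illustrative assumptions, not taken from the paper.

# Minimal sketch: prompting an LLM to score text on a psychological
# construct (here, expressed optimism). Assumes the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY in the environment.
# The model name and rating rubric are illustrative, not from the paper.
from openai import OpenAI

client = OpenAI()

def score_optimism(text: str) -> str:
    """Ask the model for a 1-5 optimism rating of the given text."""
    response = client.chat.completions.create(
        model="gpt-4",  # any chat-capable model would do
        messages=[
            {"role": "system",
             "content": "You rate texts for expressed optimism on a "
                        "scale from 1 (very pessimistic) to 5 (very "
                        "optimistic). Reply with the number only."},
            {"role": "user", "content": text},
        ],
        temperature=0,  # reduce sampling variability for measurement use
    )
    return response.choices[0].message.content.strip()

print(score_optimism("Things are hard now, but I know we'll pull through."))

In a real measurement setting, such ratings would need validation against human judgments and established instruments, which is one of the concerns the Perspective discusses.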