Crowdsourcing
Reliability
Replication
Citizen science
Data science
Psychology
Computer science
Online research methods
Social research
Applied psychology
World Wide Web
Social science
Sociology
Political science
Statistics
Botany
Mathematics
Law
Biology
Authors
Xin Qin,Mingpeng Huang,Jie Ding
Identifier
DOI:10.31234/osf.io/xkd23
Abstract
Artificial intelligence, especially large language models (LLMs), has been widely used in scientific research. Yet few studies have explored its potential to advance social science research. This research evaluates how effectively ChatGPT can mimic responses from real human participants on online crowdsourcing platforms such as Amazon Mechanical Turk (MTurk), Prolific, and CloudResearch. We replicated 22 studies published in top psychology journals between January 2023 and June 2023. Since ChatGPT 4.0’s training cutoff is September 2021, its training data do not include articles published after that time. The current research is among the first to use ChatGPT to replicate social science studies whose conclusions were not included in ChatGPT’s training data. This unique methodology strengthens the credibility of our findings and establishes a more robust foundation for applying AI to simulate human behavior. The results show that ChatGPT successfully replicates about 93.2% (20.5/22) of the findings from these studies. While conducting these studies (assuming each study is a typical 5-minute online experiment with 300 participants) on online crowdsourcing platforms could take approximately 11 days and cost around $3,960, using AI through a platform we term “AITurk” could reduce the time to about 11 minutes and the cost to $132. That is, AITurk matches real human participants’ responses on online crowdsourcing platforms with about 93.2% accuracy, in about 1/1440 of the time and at about 1/30 of the cost. Based on these findings, we suggest that ChatGPT can be an effective tool for social science research, especially for conducting preliminary research and evaluating the replicability of existing studies.
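The time and cost ratios quoted in the abstract follow directly from the reported totals. A minimal sketch checking that arithmetic (using only the figures stated in the abstract: 11 days and $3,960 for crowdsourcing versus 11 minutes and $132 for AITurk):

```python
# Verify the abstract's time/cost ratios from its reported totals.
human_cost_usd = 3960           # crowdsourcing cost for all 22 studies
human_time_min = 11 * 24 * 60   # 11 days expressed in minutes

aiturk_cost_usd = 132           # reported AITurk cost
aiturk_time_min = 11            # reported AITurk time

cost_ratio = human_cost_usd / aiturk_cost_usd   # expected: 30
time_ratio = human_time_min / aiturk_time_min   # expected: 1440

print(cost_ratio, time_ratio)  # → 30.0 1440.0
```

Both ratios come out exactly as stated: AITurk costs 1/30 as much and takes 1/1440 of the time.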