Can ChatGPT Kill User-Generated Q&A Platforms?
Subjects: Computer Science · Chemistry · Business
Authors
Jianying Xue, Lizheng Wang, Jinyang Zheng, Yongjun Li, Yong Jie Tan
Source
Journal: Social Science Research Network [Social Science Electronic Publishing] · Date: 2023-01-01 · Cited by: 1
Identifiers
DOI: 10.2139/ssrn.4448938
Abstract
Large language model (LLM) technology, e.g., ChatGPT, is expected to reshape a broad spectrum of domains. Among them, the impact on user-generated knowledge-sharing (Q&A) communities is of particular interest, because such communities are an important learning source for LLMs, and changes to them may affect the sustainable learning of LLMs. This study examines this impact via the natural experiment of ChatGPT's launch. Supported by evidence of parallel trends, a difference-in-differences (DID) analysis suggests that the launch triggered an average 2.64% reduction in question-asking on Stack Overflow, confirming a substitution enabled by lower search costs. This substitution, however, is not necessarily a threat to the sustainability of knowledge-sharing communities and hence of LLMs. The saved search cost may be reallocated to asking a smaller set of questions that are more engaging and of higher quality; the increased engagement per question may offset the engagement lost to fewer questions, and the quality improvement could benefit LLMs' future learning. Our further analysis of the qualitative changes in the questions, however, does not support this hope. While questions become 2.7% longer on average and hence more sophisticated, they are less readable and involve less cognition, and such questions may be inherently hard for LLMs to understand and process. A further mechanism analysis shows that users qualitatively adjust their questions to be longer, less readable, and less cognitive. The insignificant change in the score viewers give each question likewise suggests no improvement in question quality and a decline in platform-wide engagement. Our heterogeneity analysis further suggests that new users are more susceptible. Taken together, our paper suggests that LLMs may threaten the survival of user-generated knowledge-sharing communities, which may in turn threaten the sustainable learning and long-run improvement of LLMs.
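For readers unfamiliar with the method, the DID estimate referenced above corresponds, in its standard textbook form, to the coefficient on a treatment-period interaction; the sketch below is a minimal illustration of that canonical specification, and the variable names and log-linear form are illustrative assumptions rather than the authors' exact model:

\[
\log(\text{Questions}_{it}) = \beta_0 + \beta_1\,\text{Treated}_i + \beta_2\,\text{Post}_t + \beta_3\,(\text{Treated}_i \times \text{Post}_t) + \varepsilon_{it}
\]

Here \(\text{Treated}_i\) indicates the exposed unit (e.g., Stack Overflow relative to a comparison group), \(\text{Post}_t\) indicates the period after ChatGPT's launch, and \(\beta_3\) is the DID estimate of the launch effect; under a log-linear specification of this kind, the reported 2.64% reduction would correspond to \(\beta_3 \approx -0.026\). The parallel-trends evidence mentioned in the abstract is what justifies interpreting \(\beta_3\) causally.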