Preference
Question answering
Group (periodic table)
Psychology
Computer science
Artificial intelligence
Mathematics education
Social psychology
Information retrieval
Statistics
Mathematics
Chemistry
Organic chemistry
Authors
Francesco Walker, Matteo Favetta, Linde Hasker, Richard Walker
Identifier
DOI: 10.1145/3613905.3650955
Abstract
We investigated student trust in ChatGPT. A multiple-choice questionnaire was administered to 171 students. For each question, they chose between one answer from ChatGPT and one from a human expert. Half the answers from ChatGPT and half the answers from the human expert were correct; the other half were incorrect. One group saw answers labeled by source; a second group saw unlabeled answers. Participants selected more correct than incorrect answers and showed no preference for incorrect AI answers over correct human answers. We infer that they did not overtrust ChatGPT. However, while the unlabeled group preferred correct AI answers to incorrect human answers, the labeled group did not. We infer that this group undertrusted the technology, probably because of pro-human bias. While we should not underestimate the dangers of overtrust, undertrust may also be a significant issue, depriving students of valuable opportunities.