Grading (engineering)
Psychology
Evaluation methods
Generative grammar
Mathematics education
Pedagogy
Artificial intelligence
Computer science
Civil engineering
Engineering
Reliability engineering
Authors
Elizabeth L. Wetzler, Kenneth S. Cassidy, Margaret J. Jones, Chelsea R. Frazier, Nickalous Korbut, Chelsea M. Sims, Shari S. Bowen, Michael B. Wood
Identifier
DOI:10.1177/00986283241282696
Abstract
Background: Generative artificial intelligence (AI) represents a potentially powerful, time-saving tool for grading student essays. However, little is known about how AI-generated essay scores compare to human instructor scores.

Objective: The purpose of this study was to compare the essay grading scores produced by AI with those of human instructors to explore similarities and differences.

Method: Eight human instructors and two versions of OpenAI's ChatGPT (3.5 and 4o) independently graded 186 deidentified student essays from an introductory psychology course using a detailed rubric. Scoring consistency was analyzed using Bland-Altman and regression analyses.

Results: AI scores for ChatGPT 3.5 were, on average, higher than human scores, although average scores for ChatGPT 4o and human scores were more similar. Notably, AI grading for both versions was more lenient than human instructors at lower performance levels and stricter at higher levels, reflecting proportional bias.

Conclusion: Although AI may offer potential for supporting grading processes, the pattern of results suggests that AI and human instructors differ in how they score using the same rubric.

Teaching Implications: Results suggest that educators should be aware that AI grading of psychology writing assignments that require reflection or critical thinking may differ markedly from scores generated by human instructors.
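The Bland-Altman analysis mentioned in the Method can be sketched as follows. This is an illustrative computation on hypothetical scores, not the study's data or code: it derives the mean difference (bias) between two raters and the 95% limits of agreement, the core quantities behind a Bland-Altman plot.

```python
import numpy as np

def bland_altman(scores_a, scores_b):
    """Return mean difference (bias) and 95% limits of agreement
    between two sets of paired scores."""
    a = np.asarray(scores_a, dtype=float)
    b = np.asarray(scores_b, dtype=float)
    diff = a - b                     # e.g., AI score minus human score per essay
    bias = diff.mean()               # average disagreement
    sd = diff.std(ddof=1)            # sample standard deviation of differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
    return bias, loa

# Hypothetical rubric scores for the same eight essays (not from the study).
ai_scores    = [14, 15, 13, 16, 12, 17, 15, 14]
human_scores = [12, 14, 13, 15, 13, 16, 13, 14]
bias, (lo, hi) = bland_altman(ai_scores, human_scores)
```

A positive bias would indicate the AI rater scoring higher on average, as the study reports for ChatGPT 3.5. The proportional bias the Results describe (lenient at low performance, strict at high) is typically checked by regressing the differences on the pairwise means, which a full analysis would add on top of this sketch.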