Peer feedback
Perception
Computer science
Feedback regulation
Psychology
Mathematics education
Neuroscience
Authors
Erkan Er, Gökhan Akçapınar, Mohammad Khalil, Omid Noroozi, Seyyed Kazem Banihashem
Abstract
Despite the growing research interest in the use of large language models for feedback provision, it remains unknown how students perceive and use AI-generated feedback compared to instructor feedback in authentic settings. To address this gap, this study compared instructor and AI-generated feedback in a Java programming course through an experimental research design in which students were randomly assigned to either condition. Both feedback providers used the same assessment rubric, and students were asked to improve their work based on the feedback. The feedback perceptions scale and students' laboratory assignment scores were compared across conditions. Results showed that students perceived instructor feedback as significantly more useful than AI feedback. While instructor feedback was also perceived as fairer, more developmental, and more encouraging, these differences were not statistically significant. Importantly, students receiving instructor feedback showed significantly greater improvements in their lab scores than those receiving AI feedback, even after controlling for their initial knowledge levels. Based on the findings, we posit that AI models may need to be trained on data specific to educational contexts, and that hybrid feedback models combining the strengths of AI and instructors should be considered for effective feedback practices.

Practitioner notes

What is already known about this topic:
Feedback is crucial for student learning in programming education.
Providing detailed, personalised feedback is challenging for instructors.
AI-powered solutions like ChatGPT can be effective in feedback provision.
Existing research is limited and shows mixed results about AI-generated feedback.

What this paper adds:
The effectiveness of AI-generated feedback was compared to instructor feedback.
Both feedback types received positive perceptions, but instructor feedback was seen as more useful.
Instructor feedback led to greater score improvements in the programming task.

Implications for practice and/or policy:
AI should not be the sole source of feedback, as human expertise is crucial.
AI models should be trained on context-specific data to improve feedback actionability.
Hybrid feedback models should be considered for a scalable and effective approach.
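The score comparison described above, in which improvements are assessed while controlling for initial knowledge, corresponds to an analysis-of-covariance style model. Below is a minimal sketch of how such a comparison could be run; the column names (pre_score, post_score, condition) and the data are hypothetical, and this illustrates the general approach only, not the authors' actual analysis code.

```python
# Minimal ANCOVA-style sketch: compare post-feedback lab scores between the
# AI and instructor conditions while adjusting for initial knowledge.
# Column names and values are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# One row per student: assigned condition, score before feedback,
# and score after revising the work based on the feedback received.
df = pd.DataFrame({
    "condition": ["instructor", "instructor", "ai", "ai", "instructor", "ai"],
    "pre_score": [60, 72, 65, 70, 55, 68],
    "post_score": [85, 90, 75, 78, 80, 74],
})

# Regress the post-feedback score on condition while adjusting for the
# pre-feedback score; the condition coefficient estimates the effect of
# feedback type on score improvement.
model = smf.ols("post_score ~ pre_score + C(condition)", data=df).fit()
print(model.summary())
```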