Automating Academic Assessment: A Large Language Model Approach
Computer Science
Programming Languages
Language Models
Natural Language Processing
Software Engineering
Authors
Chatchai Wangwiwattana, Yuwaree Tongvivat
Identifier
DOI: 10.1109/incit60207.2023.10412991
Abstract
In educational settings, providing timely and quality feedback can significantly enhance student engagement and learning outcomes. However, this task becomes increasingly challenging in larger classrooms. To address this issue, this study introduces a method that leverages Large Language Models (LLMs) — already proven useful in text generation and summarization — for automatically assessing students' short-answer responses. Demonstrating effectiveness across a variety of use cases such as answer matching, keyword extraction, and clustering, the approach achieves a 99.03% accuracy rate. More than an automated grading tool, the method can also generate tailored, real-time feedback, improving the efficiency of teachers' evaluation processes. The study further provides suggestions for effectively utilizing LLMs in student assessment tasks.
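The abstract does not disclose implementation details, so the following is only a minimal sketch of how LLM-based answer matching with feedback generation might look in practice. The OpenAI chat client, the `gpt-4o-mini` model name, the rubric prompt wording, and the `grade_short_answer` helper are all illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch: prompting an LLM to match a student's short answer
# against a reference answer and return a verdict plus brief feedback.
# Assumes the OpenAI Python client is installed and OPENAI_API_KEY is set;
# the model name and prompt text are placeholders, not the authors' method.
from openai import OpenAI

client = OpenAI()

def grade_short_answer(question: str, reference: str, student_answer: str) -> str:
    """Ask the LLM whether the student answer matches the reference answer
    and return a short verdict followed by one sentence of feedback."""
    prompt = (
        "You are grading a short-answer question.\n"
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Student answer: {student_answer}\n"
        "Reply with 'CORRECT' or 'INCORRECT' on the first line, then give "
        "one sentence of feedback addressed to the student."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model would do
        messages=[{"role": "user", "content": prompt}],
        temperature=0,        # deterministic output for more consistent grading
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(grade_short_answer(
        question="What does HTTP stand for?",
        reference="HyperText Transfer Protocol",
        student_answer="Hypertext transfer protocol",
    ))
```

In such a setup, the same prompt pattern could be extended to the other use cases the abstract mentions, for example asking the model to extract keywords from a pool of answers or to group similar responses before a teacher reviews them.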