Item response theory
Test
Computational thinking
Computer science
Mathematics education
Primary education
Psychology
Artificial intelligence
Psychometrics
Developmental psychology
Paleontology
Physics
Astronomy
Biology
Authors
Siu Cheung Kong, Ming Lai
Identifier
DOI:10.1016/j.compedu.2022.104562
Abstract
Although instruments for assessing students' computational thinking (CT) concepts in primary education have been developed, they have rarely been validated in terms of item response theory (IRT). We consider IRT a rigorous validation tool and apply it to a CT concepts test for primary education involving 13,670 students. A two-parameter logistic model was chosen over other IRT models, as it showed acceptable model fit and item fit. The discrimination parameters indicated that the instrument could effectively distinguish between students of various ability levels. Nominal response modelling in IRT was used to retrieve information from the students' responses; those with a lower ability level were found to consider only one of the conditions provided, to have no understanding of the repetition structure, and possibly to have difficulty associating a sprite with its corresponding code. Based on the ability estimates, we also found that students' ability in CT concepts increased with grade level and that boys generally performed slightly better than girls. These results suggest that the instrument can be used to examine students' learning achievements in terms of CT concepts.
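For readers unfamiliar with the models named in the abstract, their standard textbook forms are sketched below; the notation is the conventional formulation, not one taken from the paper itself. Under the two-parameter logistic (2PL) model, the probability that a student with latent ability \theta answers item i correctly is

P_i(\theta) = \frac{1}{1 + \exp[-a_i(\theta - b_i)]},

where a_i is the item discrimination parameter and b_i the item difficulty parameter. The nominal response model, used here to extract information from distractor choices, assigns response category k of item i the probability

P_{ik}(\theta) = \frac{\exp(a_{ik}\theta + c_{ik})}{\sum_{j}\exp(a_{ij}\theta + c_{ij})},

with category-specific slope a_{ik} and intercept c_{ik} parameters.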