Literacy
Psychology
Metric (data warehouse)
Computer science
Pedagogy
Data mining
Authors
Thomas K. F. Chiu, Yifan Chen, King Woon Yau, Ching Sing Chai, Helen Meng, Irwin King, Savio W.H. Wong, Yeung Yam
Identifier
DOI: 10.1016/j.caeai.2024.100282
Abstract
The majority of AI literacy studies have designed and developed self-reported questionnaires to assess AI learning and understanding. These studies assessed students' perceived AI capability rather than AI literacy, because self-perceptions are seldom an accurate account of true ability. International assessment programs that use objective measures to assess scientific, mathematical, digital, and computational literacy support this argument. Furthermore, because AI education research is still in its infancy, the current definition of AI literacy in the literature may not meet the needs of young students. Therefore, this study aims to develop and validate an AI literacy test for school students within the interdisciplinary project known as AI4future. Engineering and education researchers created and selected 25 multiple-choice questions to accomplish this goal, and school teachers validated them while developing an AI curriculum for middle schools. A total of 2,390 students in grades 7 to 9 took the test. We used a Rasch model to investigate the discrimination, reliability, and validity of the items. The results showed that the model met the unidimensionality assumption and yielded a set of reliable and valid items, indicating the quality of the test. The test enables AI education researchers and practitioners to appropriately evaluate their AI-related education interventions.
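For context, the abstract names the Rasch model but does not state its form. A common reading of such an item analysis (an assumption here, not spelled out in the abstract) is the standard dichotomous Rasch model, which gives the probability that student n answers item i correctly in terms of the student's latent ability \theta_n and the item's difficulty b_i:

P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)}

Under this model, unidimensionality means that a single latent ability \theta should account for responses to all 25 items, which is the assumption the authors report as being met; item difficulty, fit, and reliability statistics derived from the model are what underpin the discrimination, reliability, and validity findings described above.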