Keywords
Item response theory
Classical test theory
Test (biology)
Test theory
Computerized adaptive testing
Cronbach's alpha
Reliability (semiconductor)
Feature (linguistics)
Key (lock)
Psychology
Artificial intelligence
Computer science
Psychometrics
Power (physics)
Developmental psychology
Linguistics
Philosophy
Physics
Biology
Paleontology
Quantum mechanics
Computer security
Authors
Simon Zegota, Tim Becker, York Hagmayer, Tobias Raupach
Identifier
DOI: 10.1080/0142159x.2022.2077716
Abstract
Background: Validation of examinations is usually based on classical test theory. In this study, we analysed a key feature examination according to item response theory and compared the results with those of a classical test theory approach.

Methods: Over the course of five years, 805 fourth-year undergraduate students took a key feature examination on general medicine consisting of 30 items. Analyses were run according to a classical test theory approach as well as using item response theory. Classical test theory analyses are reported as item difficulty, discriminatory power, and Cronbach's alpha, while item response theory analyses are presented as item characteristic curves, item information curves, and a test information function.

Results: According to classical test theory findings, the examination was labelled as easy. Analyses according to item response theory more specifically indicated that the examination was most suited to identifying struggling students. Furthermore, the analysis allowed for adapting the examination to specific ability ranges by removing items, as well as comparing multiple samples with varying ability ranges.

Conclusions: Item response theory analyses revealed results not yielded by classical test theory. Thus, both approaches should be routinely combined to increase the information yield of examination data.
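The statistics named in the abstract can be illustrated on simulated data. The sketch below is a hypothetical example, not the authors' analysis: it generates dichotomous responses for 805 students on 30 items under a one-parameter (Rasch) model with easy items, then computes the classical test theory quantities (item difficulty, corrected item-total discrimination, Cronbach's alpha) and a Rasch-style test information function. All names, parameter values, and the data-generating model are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 805 students x 30 dichotomous items (mirrors the study's
# sample size; abilities and difficulties are assumed, not from the paper).
n_students, n_items = 805, 30
ability = rng.normal(0.0, 1.0, n_students)
difficulty = rng.normal(-1.0, 1.0, n_items)  # negative mean -> an "easy" exam

# Rasch (1PL) response probabilities and simulated 0/1 responses
p = 1.0 / (1.0 + np.exp(-(ability[:, None] - difficulty[None, :])))
responses = (rng.random((n_students, n_items)) < p).astype(int)

# --- Classical test theory ---
item_difficulty = responses.mean(axis=0)  # proportion correct per item
total = responses.sum(axis=1)
# Discriminatory power as corrected item-total correlation
discrimination = np.array([
    np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
    for j in range(n_items)
])
# Cronbach's alpha: (k/(k-1)) * (1 - sum(item variances) / variance(total))
k = n_items
alpha = k / (k - 1) * (1 - responses.var(axis=0, ddof=1).sum() / total.var(ddof=1))

# --- Item response theory (Rasch) test information function ---
# Rough difficulty estimates from proportions correct (clipped to avoid log(0))
p_j = item_difficulty.clip(1 / n_students, 1 - 1 / n_students)
b_hat = -np.log(p_j / (1 - p_j))
theta = np.linspace(-3, 3, 61)
p_theta = 1.0 / (1.0 + np.exp(-(theta[:, None] - b_hat[None, :])))
test_info = (p_theta * (1 - p_theta)).sum(axis=1)  # information at each ability

print(f"mean item difficulty: {item_difficulty.mean():.2f}")
print(f"Cronbach's alpha:     {alpha:.2f}")
print(f"information peaks at theta = {theta[test_info.argmax()]:.1f}")
```

With easy items, the test information function peaks at a below-average ability level, which is the kind of finding the abstract describes: the examination is most informative for identifying struggling students, a conclusion the classical indices alone do not express.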