Computerized Adaptive Testing
Keywords
Computer Science
Machine Learning
Sample Size Determination
Calibration
Item Bank
Artificial Intelligence
Data Mining
Item Response Theory
Econometrics
Statistics
Psychometrics
Mathematics
Authors
Miguel A. Sorrel,Francisco J. Abad,Pablo Nájera
Identifier
DOI:10.1177/0146621620977682
Abstract
Decisions on how to calibrate an item bank can have major implications for the subsequent performance of the adaptive algorithms. One of these decisions is model selection, which can become problematic in the context of cognitive diagnosis computerized adaptive testing, given the wide range of models available. This article aims to determine whether model selection indices can be used to improve the performance of adaptive tests. Three factors were considered in a simulation study: calibration sample size, Q-matrix complexity, and item bank length. Results based on the true item parameters, and on general and single reduced model estimates, were compared with those obtained from the combination of appropriate models. The results indicate that fitting a single reduced model or a general model will not generally provide optimal results. Results based on the combination of models selected by the fit index were always closer to those obtained with the true item parameters. The implications for practical settings include an improvement in classification accuracy and, consequently, testing time, as well as a more balanced use of the item bank. An R package, cdcatR, was developed to facilitate adaptive applications in this context.
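The core idea behind "combining appropriate models" can be sketched as item-level model selection via an information criterion: for each item, fit several candidate cognitive diagnosis models and keep the one the fit index prefers. The sketch below is illustrative only and is not taken from the paper or from cdcatR (which is an R package); the model names, log-likelihoods, and parameter counts are hypothetical values chosen to show the mechanics.

```python
# Illustrative sketch (not the paper's implementation): per-item model
# selection by AIC, the general idea behind mixing a general CDM with
# reduced models during item bank calibration. All numbers are hypothetical.

def aic(log_lik: float, n_params: int) -> float:
    """Akaike information criterion: 2k - 2*logL (lower is better)."""
    return 2 * n_params - 2 * log_lik

def select_model(candidates: dict[str, tuple[float, int]]) -> str:
    """Pick the candidate with the lowest AIC.

    candidates maps model name -> (log-likelihood, number of item parameters).
    """
    return min(candidates, key=lambda m: aic(*candidates[m]))

# Hypothetical fits for one item measuring two attributes: the saturated
# general model (G-DINA) fits slightly better but spends more parameters
# than the reduced conjunctive (DINA) and disjunctive (DINO) models.
item_fits = {
    "G-DINA": (-522.5, 4),  # general model
    "DINA":   (-523.0, 2),  # reduced: conjunctive
    "DINO":   (-531.5, 2),  # reduced: disjunctive
}

best = select_model(item_fits)
print(best)  # -> DINA: G-DINA's small likelihood gain does not offset its extra parameters
```

Repeating this choice over every item in the bank yields the "combination of models" condition studied in the article, as opposed to forcing one reduced model or the general model on all items.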