Psychology
Reliability (semiconductor)
Two-alternative forced choice
Econometrics
Item response theory
Disengagement theory
Construct (python library)
Response bias
Monte Carlo method
Social psychology
Random effects model
Statistics
Psychometrics
Computer science
Cognitive psychology
Mathematics
Developmental psychology
Power (physics)
Gerontology
Medicine
Physics
Meta-analysis
Quantum mechanics
Internal medicine
Programming language
Authors
Siwei Peng,Kaiwen Man,Bernard P. Veldkamp,Yan Cai,Dongbo Tu
Identifier
DOI:10.1177/10944281231181642
Abstract
For various reasons, respondents to forced-choice assessments (typically used for noncognitive psychological constructs) may respond randomly to individual items due to indecision, or globally due to disengagement. Random responding is therefore a complex source of measurement bias and threatens the reliability of forced-choice assessments, which are essential in high-stakes organizational testing scenarios such as hiring decisions. Traditional measurement models rely heavily on nonrandom, construct-relevant responses to yield accurate parameter estimates. When survey data contain many random responses, fitting traditional models may deliver biased results, which could attenuate measurement reliability. This study presents a new forced-choice mixture item response theory model (called M-TCIR) for simultaneously modeling normal and random responses (distinguishing completely from incompletely random responding). The feasibility of the M-TCIR was investigated via two Monte Carlo simulation studies. In addition, one empirical dataset was analyzed to illustrate the applicability of the M-TCIR in practice. The results revealed that most model parameters were adequately recovered and that the M-TCIR was a viable alternative for modeling both aberrant and normal responses with high efficiency.
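The abstract does not reproduce the M-TCIR's formulas, but the core mixture idea (a normal forced-choice IRT component blended with a completely random, coin-flip component) can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' model: it assumes a simplified logistic choice kernel with hypothetical item parameters (a_i, b_i), and computes the posterior probability that a respondent belongs to the completely random class given their pairwise responses.

```python
import math

def logistic(x):
    """Standard logistic function."""
    return 1.0 / (1.0 + math.exp(-x))

def normal_choice_prob(theta, item_a, item_b):
    """P(prefer A over B) for an engaged respondent.

    Illustrative kernel: each item has latent utility u_i = a_i * theta + b_i,
    and the choice probability is the logistic of the utility difference.
    """
    (a1, b1), (a2, b2) = item_a, item_b
    return logistic((a1 * theta + b1) - (a2 * theta + b2))

def posterior_random(responses, theta, pairs, pi_random):
    """Posterior probability of the completely random class.

    responses : list of 0/1, where 1 means item A was chosen in the pair
    pairs     : list of ((a1, b1), (a2, b2)) item-parameter tuples
    pi_random : prior mixing weight of the completely random class

    Completely random responding is modeled as a fair coin flip per pair.
    """
    ll_random = len(responses) * math.log(0.5)
    ll_normal = 0.0
    for r, (item_a, item_b) in zip(responses, pairs):
        p = normal_choice_prob(theta, item_a, item_b)
        ll_normal += math.log(p if r == 1 else 1.0 - p)
    num = pi_random * math.exp(ll_random)
    den = num + (1.0 - pi_random) * math.exp(ll_normal)
    return num / den

# A respondent who consistently prefers the higher-utility item looks engaged;
# an alternating response pattern accrues a higher random-class posterior.
pairs = [((1.0, 0.5), (1.0, -0.5))] * 10
p_consistent = posterior_random([1] * 10, 1.0, pairs, 0.2)
p_alternating = posterior_random([1, 0] * 5, 1.0, pairs, 0.2)
```

This two-class sketch omits the paper's incompletely random class and the Thurstonian structure of real forced-choice models; it only conveys how a mixture likelihood separates engaged from random responding.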