Keywords
Intraclass correlation
Generalizability theory
Interrater reliability
Observational study
Statistics
Psychometrics
Rating scale
Psychology
Author
Debby ten Hove, Terrence D. Jorgensen, L. Andries van der Ark
Abstract
Several intraclass correlation coefficients (ICCs) are available to assess the interrater reliability (IRR) of observational measurements. Selecting an ICC is complicated, and existing guidelines have three major limitations. First, they do not discuss incomplete designs, in which raters partially vary across subjects. Second, they provide no coherent perspective on the error variance in an ICC, clouding the choice between the available coefficients. Third, the distinction between fixed or random raters is often misunderstood. Based on generalizability theory (GT), we provide updated guidelines on selecting an ICC for IRR, which are applicable to both complete and incomplete observational designs. We challenge conventional wisdom about ICCs for IRR by claiming that raters should seldom (if ever) be considered fixed. Also, we clarify how to interpret ICCs in the case of unbalanced and incomplete designs. We explain four choices a researcher needs to make when selecting an ICC for IRR, and guide researchers through these choices by means of a flowchart, which we apply to three empirical examples from clinical and developmental domains. In the Discussion, we provide guidance in reporting, interpreting, and estimating ICCs, and propose future directions for research into the ICCs for IRR. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
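The intraclass correlation coefficients discussed in the abstract are computed from ANOVA variance components. As a minimal sketch (not the authors' own software; the function name and the complete n-subjects-by-k-raters input layout are assumptions for illustration), the function below computes ICC(2,1), the two-way random-effects, absolute-agreement, single-rater coefficient, for a complete design:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: n_subjects x k_raters array of scores (complete design,
    every rater rates every subject).
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-rater means

    # Sums of squares for subjects (rows), raters (columns), and residual
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((x - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols

    # Mean squares
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))

    # Shrout & Fleiss ICC(2,1): rater variance counts as error
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because the rater (column) variance appears in the denominator, systematic rater severity lowers this coefficient; that is the "absolute agreement" choice, one of the decisions the flowchart in the paper walks through. For incomplete designs, where raters only partially cross subjects, this closed form does not apply and variance components must be estimated differently.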