Interrater Reliability
Agreement
Psychology
Ordinal data
Statistics
Mathematics
Developmental psychology
Linguistics
Philosophy
Rating scale
Source
Journal: John Wiley & Sons, Ltd eBooks
[Wiley]
Date: 2016-10-07
Pages: 232-254
Citations: 3
Identifiers
DOI:10.1002/9781119407201.ch11
Abstract
This chapter focuses on three measures of interrater agreement that researchers use to assess reliability in content analyses: Cohen's kappa, Scott's pi, and Krippendorff's alpha. Statisticians generally consider kappa the most popular measure of agreement for categorical data. Weighted kappa became an important measure in the social sciences, allowing researchers to move beyond unordered nominal categories to measures containing ordered observations. The intraclass correlation coefficient serves as a viable option for testing agreement when more than two raters assess ordinal content. A key concern in using an intraclass correlation coefficient as a measure of agreement is the selection of the correct ICC statistic. Intraclass correlation coefficients also provide indications of reliability with ordinal data, as does Kendall's coefficient of concordance. The chapter offers SPSS instructions for computing kappa and intraclass correlation coefficients.
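The chapter's worked instructions are for SPSS; purely as an illustration of the kappa and weighted-kappa formulas the abstract refers to, the following Python sketch (the function name and sample ratings are hypothetical, not taken from the chapter) computes Cohen's kappa and its quadratic-weighted variant for two raters rating the same items.

```python
import numpy as np

def cohens_kappa(rater1, rater2, weights=None):
    """Cohen's kappa for two raters over the same items.

    weights: None for nominal (unweighted) kappa, or "quadratic" for
    quadratic-weighted kappa on ordered categories.
    """
    rater1 = np.asarray(rater1)
    rater2 = np.asarray(rater2)
    categories = np.unique(np.concatenate([rater1, rater2]))
    k = len(categories)
    index = {c: i for i, c in enumerate(categories)}

    # Observed contingency table of the two raters' ratings (proportions).
    observed = np.zeros((k, k))
    for a, b in zip(rater1, rater2):
        observed[index[a], index[b]] += 1
    observed /= observed.sum()

    # Expected table under chance agreement (product of the marginals).
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))

    # Disagreement weights: 0/1 for nominal kappa, squared category
    # distance (scaled to [0, 1]) for the quadratic-weighted variant.
    i, j = np.indices((k, k))
    if weights == "quadratic":
        w = ((i - j) ** 2) / ((k - 1) ** 2)
    else:
        w = (i != j).astype(float)

    # kappa = 1 - (weighted observed disagreement) / (weighted expected disagreement)
    return 1.0 - (w * observed).sum() / (w * expected).sum()


if __name__ == "__main__":
    # Hypothetical example: two raters assigning a 1-4 ordinal rating to ten items.
    r1 = [1, 2, 2, 3, 4, 4, 1, 2, 3, 3]
    r2 = [1, 2, 3, 3, 4, 3, 1, 2, 3, 4]
    print("kappa          :", round(cohens_kappa(r1, r2), 3))
    print("weighted kappa :", round(cohens_kappa(r1, r2, "quadratic"), 3))
```

The quadratic weights treat near-misses on the ordinal scale as partial agreement rather than full disagreement, which is what lets weighted kappa handle ordered categories rather than only unordered nominal ones.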