Interrater reliability
Kappa
Cohen's kappa
Categorical variable
Statistics
Confidence interval
Agreement
Concordance
Reliability (semiconductor)
Medicine
Psychology
Mathematics
Rating scale
Internal medicine
Linguistics
Quantum mechanics
Physics
Philosophy
Power (physics)
Geometry
Authors
V. W. Steinijans, E. Diletti, B. Bömches, Christian Greis, P. Solleder
Source
Journal: PubMed
Date: 1997-03-01
Volume/Issue: 35 (3): 93-5
Citations: 11
Abstract
A widely accepted approach to evaluating interrater reliability for categorical responses involves the rating of n subjects by at least 2 raters. Frequently, there are only 2 response categories, such as a positive or negative diagnosis. The same approach is commonly used to assess the concordant classification by 2 diagnostic methods. Depending on whether one uses the percent agreement as such or corrected for the agreement expected by chance, i.e. Cohen's kappa coefficient, one can obtain quite different values. This short communication demonstrates that Cohen's kappa coefficient of agreement between 2 raters or 2 diagnostic methods based on binary (yes/no) responses does not parallel the percentage of patients with congruent classifications. It may therefore be of limited value in assessing increases in interrater reliability due to an improved diagnostic method. The percentage of patients with congruent classifications is easier to interpret clinically, but it does not account for the agreement expected by chance. We therefore recommend presenting both the percentage of patients with congruent classifications and Cohen's kappa coefficient with 95% confidence limits.
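To illustrate the distinction the abstract draws, here is a minimal Python sketch (not taken from the paper) that computes the raw percent agreement and Cohen's kappa for a 2x2 cross-classification of two raters' binary ratings, together with an approximate large-sample 95% confidence interval. The cell counts, the function name kappa_from_2x2, and the simple standard-error formula are illustrative assumptions, not the authors' data or method.

```python
import math

def kappa_from_2x2(a, b, c, d):
    """Raw percent agreement and Cohen's kappa for two raters giving
    binary (yes/no) ratings, from the 2x2 cross-classification:
      a = both positive, b = rater 1 positive / rater 2 negative,
      c = rater 1 negative / rater 2 positive, d = both negative."""
    n = a + b + c + d
    p_o = (a + d) / n                        # observed (raw) agreement
    p_yes = ((a + b) / n) * ((a + c) / n)    # chance agreement on "yes"
    p_no = ((c + d) / n) * ((b + d) / n)     # chance agreement on "no"
    p_e = p_yes + p_no                       # agreement expected by chance
    kappa = (p_o - p_e) / (1 - p_e)
    # Simple large-sample standard error (an approximation, not the exact
    # variance formula) and the resulting 95% confidence limits.
    se = math.sqrt(p_o * (1 - p_o) / n) / (1 - p_e)
    return p_o, kappa, (kappa - 1.96 * se, kappa + 1.96 * se)

# Two hypothetical tables, both with 90% raw agreement:
print(kappa_from_2x2(45, 5, 5, 45))  # balanced marginals -> kappa ~ 0.80
print(kappa_from_2x2(1, 5, 5, 89))   # rare "positive"    -> kappa ~ 0.11
```

Both hypothetical tables show 90% raw agreement, yet kappa drops from about 0.80 to about 0.11 as the "positive" category becomes rare, which is precisely why the authors recommend reporting both measures rather than either one alone.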