Keywords
Intraclass correlation
Interrater reliability
Debriefing
Cronbach's alpha
Psychology
Psychometrics
Clinical psychology
Generalizability theory
Applied psychology
Medicine
Computer science
Social psychology
Developmental psychology
Rating scale
Authors
Marisa Brett-Fleegler, Jenny W. Rudolph, Walter Eppich, Michael C. Monuteaux, Eric W. Fleegler, Adam Cheng, Robert Simon
Source
Journal: Simulation in Healthcare: Journal of the Society for Simulation in Healthcare [Ovid Technologies (Wolters Kluwer)]
Date: 2012-10-01
Volume/Issue: 7 (5): 288-294
Cited by: 249
Identifiers
DOI: 10.1097/sih.0b013e3182620228
Abstract
This study examined the reliability of the scores of an assessment instrument, the Debriefing Assessment for Simulation in Healthcare (DASH), in evaluating the quality of health care simulation debriefings. The secondary objective was to evaluate whether the instrument's scores demonstrate evidence of validity.

Two aspects of reliability were examined, interrater reliability and internal consistency. To assess interrater reliability, intraclass correlations were calculated for 114 simulation instructors enrolled in webinar training courses in the use of the DASH. The instructors reviewed a series of 3 standardized debriefing sessions. To assess internal consistency, Cronbach α was calculated for this cohort. Finally, 1 measure of validity was examined by comparing the scores across 3 debriefings of different quality.

Intraclass correlation coefficients for the individual elements were predominantly greater than 0.6. The overall intraclass correlation coefficient for the combined elements was 0.74. Cronbach α was 0.89 across the webinar raters. There were statistically significant differences among the ratings for the 3 standardized debriefings (P < 0.001).

The DASH scores showed evidence of good reliability and preliminary evidence of validity. Additional work will be needed to assess the generalizability of the DASH based on the psychometrics of DASH data from other settings.
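The two reliability statistics reported in the abstract, the intraclass correlation coefficient and Cronbach α, can both be computed from a targets-by-raters score matrix. The sketch below is a minimal illustration in Python with NumPy, not the authors' analysis code: it assumes a two-way random-effects, absolute-agreement, single-rater ICC (ICC(2,1) in Shrout and Fleiss terms), whereas the paper does not state which ICC form was used, and the `ratings` array is invented purely for demonstration.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    X = np.asarray(item_scores, dtype=float)
    k = X.shape[1]
    item_variances = X.var(axis=0, ddof=1)       # variance of each item
    total_variance = X.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is an (n_targets, k_raters) matrix; the formula follows
    Shrout & Fleiss (1979).
    """
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand_mean = Y.mean()
    ss_targets = k * ((Y.mean(axis=1) - grand_mean) ** 2).sum()
    ss_raters = n * ((Y.mean(axis=0) - grand_mean) ** 2).sum()
    ss_total = ((Y - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_targets - ss_raters
    ms_targets = ss_targets / (n - 1)
    ms_raters = ss_raters / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_targets - ms_error) / (
        ms_targets + (k - 1) * ms_error + k * (ms_raters - ms_error) / n
    )

if __name__ == "__main__":
    # Hypothetical data: 6 debriefing elements rated by 4 raters on a 1-7 scale.
    ratings = np.array([
        [6, 5, 6, 5],
        [4, 4, 5, 4],
        [7, 6, 6, 7],
        [3, 3, 4, 3],
        [5, 5, 5, 6],
        [6, 6, 7, 6],
    ])
    print(f"ICC(2,1)       = {icc_2_1(ratings):.2f}")
    # For alpha, treat the 6 elements as items and the 4 raters as respondents.
    print(f"Cronbach alpha = {cronbach_alpha(ratings.T):.2f}")
```

Standard statistical packages provide equivalent routines (for example, the `ICC` and `alpha` functions in R's psych package), which would be the usual choice in practice; the manual versions above are only meant to make the formulas explicit.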