Data extraction
Reliability (semiconductor)
Context (archaeology)
Metric (data warehouse)
Statistics
Computer science
Similarity (geometry)
Data mining
Econometrics
Psychology
Artificial intelligence
Mathematics
MEDLINE
Physics
Law
Power (physics)
Paleontology
Image (mathematics)
Biology
Quantum mechanics
Political science
Authors
Daniel D. Drevon,Allison M. Peart,Elizabeth T. Koval
Identifier
DOI: 10.1080/2372966X.2023.2273822
Abstract
Meta-analyzing data from single-case experimental designs (SCEDs) usually requires data extraction, a process by which numerical values are obtained from linear graphs in primary studies, prior to calculating and aggregating single-case effect measures. Existing research suggests data extraction yields reliable and valid data; however, we have an incomplete understanding of the downstream effects of relying on data extracted by two or more people. This study was undertaken to enhance that understanding in the context of SCEDs published in school psychology journals. Data for 91 unique outcomes across 67 cases in 20 SCEDs were extracted by two data extractors. Four different single-case effect measures were calculated using data extracted by each data extractor and then compared to determine the similarity of the effect measures. Overall, intercoder reliability metrics suggested a high degree of agreement, and there were minimal differences in single-case effect measures calculated from data extracted by different researchers. Intercoder reliability metrics and differences in single-case effect measures were generally negatively related, though the strength varied depending on the single-case effect measure. Hence, it is unlikely that the small differences in effect measure estimates due to the slight unreliability of the data extraction process would have a considerable impact on the interpretation of single-case effect measures.
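The abstract describes a pipeline in which two coders independently extract numerical values from published graphs, single-case effect measures are computed from each coder's data, and the resulting estimates are compared. As a minimal, hypothetical sketch of that comparison, the Python below computes one common single-case effect measure, Nonoverlap of All Pairs (NAP). The abstract does not name the four effect measures used in the study, so the choice of NAP, the coder data, and the Pearson r reliability check are illustrative assumptions, not the authors' procedure.

```python
# Illustrative sketch (not the authors' exact method): compute NAP from data
# "extracted" by two hypothetical coders, then compare intercoder agreement
# and the resulting difference in the effect estimate. All values are made up.

from itertools import product
from statistics import correlation  # Pearson's r; requires Python 3.10+

def nap(baseline, intervention):
    """Nonoverlap of All Pairs: the proportion of baseline/intervention
    pairs in which the intervention point exceeds the baseline point
    (ties count as half)."""
    pairs = list(product(baseline, intervention))
    score = sum(1.0 if b < t else 0.5 if b == t else 0.0 for b, t in pairs)
    return score / len(pairs)

# Hypothetical values read off the same published graph by two extractors;
# the small discrepancies mimic the slight unreliability of data extraction.
coder1 = {"baseline": [2.0, 3.0, 4.0, 3.0], "intervention": [4.0, 6.0, 5.0, 7.0]}
coder2 = {"baseline": [2.0, 3.0, 4.1, 3.0], "intervention": [3.9, 6.0, 5.0, 7.0]}

nap1 = nap(coder1["baseline"], coder1["intervention"])
nap2 = nap(coder2["baseline"], coder2["intervention"])

# One simple intercoder reliability metric: Pearson r across all extracted points.
r = correlation(coder1["baseline"] + coder1["intervention"],
                coder2["baseline"] + coder2["intervention"])

print(f"NAP (coder 1) = {nap1:.3f}")   # 0.969
print(f"NAP (coder 2) = {nap2:.3f}")   # 0.938
print(f"|difference|  = {abs(nap1 - nap2):.3f}")
print(f"intercoder r  = {r:.3f}")      # near 1.0: high agreement
```

As in the study's findings, near-perfect agreement between extractors translates here into only a small difference in the effect estimate, which would rarely change its interpretation.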