Replication (statistics)
Sample size determination
Bayes factor
Null (SQL)
Null hypothesis
Publication bias
Replication
Sample (material)
Bayes' theorem
Bayesian probability
Statistical hypothesis testing
Econometrics
Psychology
Statistics
Cognitive psychology
Computer science
Biology
Meta-analysis
Mathematics
Medicine
Data mining
Pathology
Physics
Thermodynamics
Authors
Alexander Etz, Joachim Vandekerckhove
Source
Journal: PLOS ONE [Public Library of Science]
Date: 2016-02-26
Volume/Issue: 11(2): e0149794
Citations: 284
Identifier
DOI: 10.1371/journal.pone.0149794
Abstract
We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors—a quantity that can be used to express comparative evidence for an hypothesis but also for the null hypothesis—for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempts provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable.
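For context (this note is not part of the original abstract): the Bayes factor referred to above is standardly defined as the ratio of the marginal likelihoods of the data under the two competing hypotheses. The paper itself uses a version adjusted for publication bias, but the standard form, with values of BF above 10 conventionally read as strong evidence (the threshold cited in the abstract), is

% Bayes factor comparing H1 to H0; BF_{10} > 10 is conventionally
% "strong" evidence for H1, BF_{10} < 1/10 "strong" evidence for H0.
\[
\mathrm{BF}_{10}
  = \frac{p(D \mid H_1)}{p(D \mid H_0)}
  = \frac{\int p(D \mid \theta, H_1)\,\pi(\theta \mid H_1)\,d\theta}
         {\int p(D \mid \theta, H_0)\,\pi(\theta \mid H_0)\,d\theta}
\]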