Bootstrapping (statistics)
Logistic regression
Statistics
Cross-validation
Sample size determination
Sample (material)
Regression analysis
Predictive modelling
Discriminant
Computer science
Mathematics
Econometrics
Artificial intelligence
Chromatography
Chemistry
Authors
Ewout W. Steyerberg, Frank E. Harrell, Gerard Borsboom, Marinus J.C. Eijkemans, Yvonne Vergouwe, J. Dik F. Habbema
Identifiers
DOI:10.1016/s0895-4356(01)00341-9
Abstract
The performance of a predictive model is overestimated when simply determined on the sample of subjects that was used to construct the model. Several internal validation methods are available that aim to provide a more accurate estimate of model performance in new subjects. We evaluated several variants of split-sample, cross-validation and bootstrapping methods with a logistic regression model that included eight predictors for 30-day mortality after an acute myocardial infarction. Random samples with a size between n = 572 and n = 9165 were drawn from a large data set (GUSTO-I; n = 40,830; 2851 deaths) to reflect modeling in data sets with between 5 and 80 events per variable. Independent performance was determined on the remaining subjects. Performance measures included discriminative ability, calibration and overall accuracy. We found that split-sample analyses gave overly pessimistic estimates of performance, with large variability. Cross-validation on 10% of the sample had low bias and low variability, but was not suitable for all performance measures. Internal validity could best be estimated with bootstrapping, which provided stable estimates with low bias. We conclude that split-sample validation is inefficient, and recommend bootstrapping for estimation of internal validity of a predictive logistic regression model.
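The bootstrapping approach the abstract recommends is commonly implemented as Harrell's optimism correction: fit the model on each bootstrap resample, measure how much better it performs on that resample than on the original data, and subtract the average of that difference from the apparent performance. A minimal sketch in Python, using synthetic data with eight predictors as a stand-in for the GUSTO-I cohort (the data, sample size, and B = 100 replicates here are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic cohort (hypothetical): 8 predictors, binary outcome,
# standing in for 30-day mortality data such as GUSTO-I.
n, p = 500, 8
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ beta - 1.0))))

# Apparent performance: fit and evaluate on the same sample
# (overestimates discrimination, as the abstract notes).
model = LogisticRegression(max_iter=1000).fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Optimism bootstrap: refit in each resample; optimism is the
# resample AUC minus that refitted model's AUC on the original data.
B = 100
optimism = 0.0
for _ in range(B):
    idx = rng.integers(0, n, size=n)  # resample with replacement
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
    optimism += (auc_boot - auc_orig) / B

corrected_auc = apparent_auc - optimism
print(f"apparent AUC {apparent_auc:.3f}, corrected {corrected_auc:.3f}")
```

The same loop applies to any performance measure (calibration slope, Brier score) by swapping out `roc_auc_score`; because every bootstrap model is refitted from scratch, the correction accounts for the full model-building process.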