Estimator
Leverage (statistics)
Asymptotic distribution
Consistency (knowledge bases)
Mathematics
Logistic regression
Computer science
Mean squared error
Algorithm
Shrinkage
Mathematical optimization
Statistics
Artificial intelligence
Authors
Hai Ying Wang, Rong Zhu, Ping Ma
Identifier
DOI:10.1080/01621459.2017.1292914
Abstract
For massive data, the family of subsampling algorithms is popular to downsize the data volume and reduce computational burden. Existing studies focus on approximating the ordinary least-squares estimate in linear regression, where statistical leverage scores are often used to define subsampling probabilities. In this article, we propose fast subsampling algorithms to efficiently approximate the maximum likelihood estimate in logistic regression. We first establish consistency and asymptotic normality of the estimator from a general subsampling algorithm, and then derive optimal subsampling probabilities that minimize the asymptotic mean squared error of the resultant estimator. An alternative minimization criterion is also proposed to further reduce the computational cost. The optimal subsampling probabilities depend on the full data estimate, so we develop a two-step algorithm to approximate the optimal subsampling procedure. This algorithm is computationally efficient and achieves a significant reduction in computing time compared to the full data approach. Consistency and asymptotic normality of the estimator from the two-step algorithm are also established. Synthetic and real datasets are used to evaluate the practical performance of the proposed method. Supplementary materials for this article are available online.
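The two-step procedure described in the abstract can be sketched as follows: a uniform pilot subsample yields a rough estimate, which is then used to form data-dependent subsampling probabilities, and the final estimate is computed from an inverse-probability-weighted likelihood on the second subsample. This is a minimal illustrative sketch, not the authors' implementation; the choice of probabilities proportional to |y_i − p_i(β̃)|·‖x_i‖ corresponds to one simple variant of the criteria discussed in the paper, and the function names, subsample sizes `r0` and `r`, and the fixed-iteration Newton solver are all assumptions made here for illustration.

```python
import numpy as np

def logistic_mle(X, y, w=None, iters=50):
    """(Weighted) logistic regression MLE via Newton-Raphson.

    Maximizes sum_i w_i * [y_i * x_i'beta - log(1 + exp(x_i'beta))].
    """
    n, d = X.shape
    if w is None:
        w = np.ones(n)
    beta = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))          # fitted probabilities
        grad = X.T @ (w * (y - p))                   # weighted score
        H = (X * (w * p * (1.0 - p))[:, None]).T @ X # weighted information
        beta = beta + np.linalg.solve(H, grad)       # Newton ascent step
    return beta

def two_step_subsample(X, y, r0=400, r=2000, rng=None):
    """Illustrative two-step subsampling for logistic regression.

    Step 1: uniform pilot subsample of size r0 -> pilot estimate.
    Step 2: subsample of size r with probabilities proportional to
            |y_i - p_i| * ||x_i||, then inverse-probability-weighted MLE.
    (r0, r, and the probability formula are illustrative choices.)
    """
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    # Step 1: pilot estimate from a uniform subsample
    idx0 = rng.choice(n, size=r0, replace=True)
    beta_pilot = logistic_mle(X[idx0], y[idx0])
    # Step 2: data-dependent probabilities (strictly positive since p in (0,1))
    p_full = 1.0 / (1.0 + np.exp(-X @ beta_pilot))
    pi = np.abs(y - p_full) * np.linalg.norm(X, axis=1)
    pi = pi / pi.sum()
    idx = rng.choice(n, size=r, replace=True, p=pi)
    # Inverse-probability weights correct the bias from non-uniform sampling
    return logistic_mle(X[idx], y[idx], w=1.0 / pi[idx])
```

Sampling with replacement keeps the probability computation a single O(n·d) pass; the weighted MLE on the r-point subsample is then cheap relative to fitting all n observations.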