Missing data
Imputation (statistics)
Boosting (machine learning)
Cluster analysis
Computer science
Statistics
Gradient boosting
Artificial intelligence
Ensemble learning
Artificial neural network
Random forest
Pattern recognition (psychology)
Data mining
Machine learning
Mathematics
Authors
Manar D. Samad, Sakib Abrar, Norou Diawara
Identifier
DOI:10.1016/j.knosys.2022.108968
Abstract
Missing values in tabular data restrict the use and performance of machine learning, requiring the imputation of missing values. The most popular imputation algorithm is arguably multiple imputation by chained equations (MICE), which estimates missing values from linear conditioning on observed values. This paper proposes methods to improve both the imputation accuracy of MICE and the classification accuracy of imputed data by replacing MICE's linear regressors with ensemble learning and deep neural networks (DNN). The imputation accuracy is further improved by characterizing individual samples with cluster labels (CISCL) obtained from the training data. Our extensive analyses involving six tabular data sets, up to 80% missing values, and three missing types (missing completely at random, missing at random, missing not at random) reveal that ensemble or deep learning within MICE is superior to the baseline MICE (b-MICE), both of which are consistently outperformed by CISCL. Results show that CISCL + b-MICE outperforms b-MICE for all percentages and types of missingness. Our proposed DNN-based MICE and gradient boosting MICE plus CISCL (GB-MICE-CISCL) outperform seven state-of-the-art imputation algorithms in most experimental cases. The classification accuracy of GB-MICE imputed data is further improved by our proposed GB-MICE-CISCL imputation method across all missingness percentages. Results also reveal a shortcoming of the MICE framework at high missingness (>50%) and when the missing type is not random. This paper provides a generalized approach to identifying the best imputation model for a data set with a given missingness percentage and type.
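The abstract's core idea (replacing MICE's linear conditional models with gradient boosting, and conditioning on cluster labels) can be sketched with scikit-learn. This is a minimal illustration, not the authors' implementation: `IterativeImputer` stands in for the MICE framework, and the CISCL step is approximated by appending KMeans labels from a roughly imputed copy of the data as an extra feature — both are assumptions for illustration.

```python
# Sketch of MICE-style imputation with a gradient-boosting conditional model
# (GB-MICE) plus a cluster-label feature (CISCL-like), using scikit-learn.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer, SimpleImputer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:, 3] = 2 * X[:, 0] + rng.normal(scale=0.1, size=200)  # correlated column
mask = rng.random(X.shape) < 0.2        # ~20% missing completely at random
X_miss = X.copy()
X_miss[mask] = np.nan

# CISCL-like step (hypothetical): cluster a mean-imputed copy and append
# the cluster label as an additional conditioning feature.
X_rough = SimpleImputer(strategy="mean").fit_transform(X_miss)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_rough)
X_aug = np.column_stack([X_miss, labels])

# Baseline MICE conditions each column on the others with a linear model;
# here each conditional model is a gradient-boosting regressor instead.
imputer = IterativeImputer(
    estimator=GradientBoostingRegressor(random_state=0),
    max_iter=10,
    random_state=0,
)
X_imputed = imputer.fit_transform(X_aug)[:, :4]  # drop the label column
print(int(np.isnan(X_imputed).sum()))  # → 0 (no missing values remain)
```

Swapping the `estimator` argument (e.g. a random-forest or neural-network regressor) reproduces the paper's comparison between linear, ensemble, and DNN conditional models within the same chained-equations loop.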