The recent massive growth of IoT devices and Internet data, which are widely used in many applications including industry and healthcare, has dramatically increased the amount of freely collected unlabeled data. However, unlabeled data alone cannot be used to train supervised machine learning models, and the expense and time required for labeling make the problem even more challenging. Active learning (AL) addresses this by labeling only a small but highly informative and representative subset of the data, which promotes generalization over the input space and improves classification performance on previously unseen data. The task becomes more difficult when the active learner has no prior knowledge, such as an initial labeled training set, and when the collected data are incomplete (i.e., contain missing values). In previous studies, the missing data are first imputed, and the active learner then selects from the available unlabeled data regardless of whether the points were originally observed or imputed. However, selecting inaccurately imputed data points misleads the active learner and prevents it from choosing informative and/or representative points, thereby reducing the overall classification performance of the prediction models. This motivated us to introduce a novel query selection strategy that accounts for imputation uncertainty when querying new points. For this purpose, we first introduce a novel multiple imputation method that considers feature importance when selecting the most promising feature groups for estimating missing values. This multiple imputation method also makes it possible to quantify the imputation uncertainty of each imputed data point. Furthermore, in each of the two phases of the proposed active learner (exploration and exploitation), imputation uncertainty is taken into account to reduce the probability of selecting points with high imputation uncertainty. We evaluated the effectiveness of the proposed active learner on several binary and multiclass datasets with different missing rates.
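
The following is a minimal sketch of the general idea described above: multiple imputation yields a per-point imputation uncertainty, which then down-weights the query score of pool points during selection. It is not the authors' exact method; scikit-learn's IterativeImputer stands in for the proposed feature-importance-based multiple imputation, and the margin-based informativeness score, the `lam` penalty weight, and the function names are illustrative assumptions.

```python
# Sketch only: imputation-uncertainty-aware query selection for pool-based AL.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer


def multiple_impute(X_missing, n_imputations=5):
    """Return the mean imputed matrix and a per-point imputation uncertainty
    (average variance of a point's imputed entries across the M imputations).
    Stand-in for the paper's feature-importance-based multiple imputation."""
    copies = []
    for seed in range(n_imputations):
        imp = IterativeImputer(sample_posterior=True, random_state=seed)
        copies.append(imp.fit_transform(X_missing))
    copies = np.stack(copies)                 # shape: (M, n_samples, n_features)
    entry_var = copies.var(axis=0)            # disagreement between imputations
    missing_mask = np.isnan(X_missing)
    n_missing = missing_mask.sum(axis=1)
    point_uncertainty = np.where(
        n_missing > 0,
        (entry_var * missing_mask).sum(axis=1) / np.maximum(n_missing, 1),
        0.0,                                  # fully observed rows are certain
    )
    return copies.mean(axis=0), point_uncertainty


def query_index(model, X_pool, imp_uncertainty, lam=1.0):
    """Pick the pool point trading off model informativeness (small prediction
    margin) against imputation uncertainty; `lam` controls the penalty."""
    proba = model.predict_proba(X_pool)
    top2 = np.sort(proba, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]          # small margin = informative point
    informativeness = 1.0 - margin
    score = informativeness - lam * imp_uncertainty
    return int(np.argmax(score))


# Hypothetical usage with any classifier exposing predict_proba:
#   X_imputed, imp_unc = multiple_impute(X_pool_with_nans)
#   next_idx = query_index(fitted_clf, X_imputed, imp_unc, lam=0.5)
```

In this sketch the penalty is a simple linear subtraction; the paper instead folds imputation uncertainty into both the exploration and exploitation phases of its query strategy, so the weighting shown here should be read only as a conceptual illustration.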