Computer science
Class (philosophy)
Machine learning
Artificial intelligence
Sampling (signal processing)
Simple random sample
Event (particle physics)
Data mining
Simplicity (philosophy)
Philosophy
Population
Demography
Sociology
Physics
Epistemology
Filter (signal processing)
Quantum mechanics
Computer vision
Authors
Gustavo E. A. P. A. Batista, Ronaldo C. Prati, Maria Carolina Monard
Source
Journal: SIGKDD Explorations
[Association for Computing Machinery]
Date: 2004-06-01
Volume/Issue: 6 (1): 20-29
Cited by: 3343
Identifier
DOI: 10.1145/1007730.1007735
Abstract
There are several aspects that might influence the performance achieved by existing learning systems. It has been reported that one of these aspects is class imbalance, in which examples in the training data belonging to one class heavily outnumber the examples in the other class. In this situation, which is found in real-world data describing an infrequent but important event, the learning system may have difficulty learning the concept related to the minority class. In this work we perform a broad experimental evaluation involving ten methods, three of them proposed by the authors, to deal with the class imbalance problem in thirteen UCI data sets. Our experiments provide evidence that class imbalance does not systematically hinder the performance of learning systems. In fact, the problem seems to be related to learning with too few minority class examples in the presence of other complicating factors, such as class overlapping. Two of our proposed methods deal with these conditions directly, combining a known over-sampling method with data cleaning methods in order to produce better-defined class clusters. Our comparative experiments show that, in general, over-sampling methods provide more accurate results than under-sampling methods considering the area under the ROC curve (AUC). This result seems to contradict results previously published in the literature. Two of our proposed methods, SMOTE + Tomek and SMOTE + ENN, presented very good results for data sets with a small number of positive examples. Moreover, random over-sampling, a very simple over-sampling method, is very competitive with more complex over-sampling methods. Since the over-sampling methods provided very good performance results, we also measured the syntactic complexity of the decision trees induced from over-sampled data. Our results show that these trees are usually more complex than the ones induced from the original data. Random over-sampling usually produced the smallest increase in the mean number of induced rules, and SMOTE + ENN the smallest increase in the mean number of conditions per rule, among the investigated over-sampling methods.
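To make the comparison concrete, below is a minimal sketch of the kind of resampling-and-evaluation loop the abstract describes, written with the third-party scikit-learn and imbalanced-learn libraries rather than the authors' original implementation. The synthetic data set, all parameter values, and the use of leaf count as a rough proxy for the number of induced rules are illustrative assumptions, not details taken from the paper.

```python
# Sketch: compare plain, random over-sampled, SMOTE, SMOTE + Tomek, and
# SMOTE + ENN training data by the AUC of a decision tree, and track tree
# size as a crude complexity measure. All settings here are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from imblearn.over_sampling import RandomOverSampler, SMOTE
from imblearn.combine import SMOTETomek, SMOTEENN

# Imbalanced binary data: roughly 5% positive (minority) examples.
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.95, 0.05], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42)

samplers = {
    "original data": None,
    "random over-sampling": RandomOverSampler(random_state=42),
    "SMOTE": SMOTE(random_state=42),
    "SMOTE + Tomek": SMOTETomek(random_state=42),
    "SMOTE + ENN": SMOTEENN(random_state=42),
}

for name, sampler in samplers.items():
    if sampler is None:
        X_res, y_res = X_train, y_train
    else:
        # Resample only the training split; the test split stays untouched.
        X_res, y_res = sampler.fit_resample(X_train, y_train)
    tree = DecisionTreeClassifier(random_state=42).fit(X_res, y_res)
    auc = roc_auc_score(y_test, tree.predict_proba(X_test)[:, 1])
    # Each root-to-leaf path corresponds to one rule, so the leaf count
    # stands in here for the paper's "mean number of induced rules".
    print(f"{name:22s} AUC={auc:.3f}  leaves={tree.get_n_leaves()}")
```

The exact numbers printed depend on the synthetic data; the point is the shape of the experiment: resample only the training split, evaluate by AUC on held-out data, and measure the size of the induced tree alongside its accuracy.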