Oversampling
Resampling
Classifier (UML)
Computer science
Artificial intelligence
Machine learning
Class (philosophy)
Pattern recognition (psychology)
Bandwidth (computing)
Computer network
Authors
Zeyu Teng, Peng Cao, Min Huang, Zheming Gao, Xingwei Wang
Identifier
DOI: 10.1016/j.patcog.2023.109953
Abstract
The class imbalance problem commonly exists in multi-label classification (MLC) tasks. It has a non-negligible impact on classifier performance and has drawn extensive attention in recent years. Borderline oversampling has been widely used in single-label learning as a competitive technique for dealing with class imbalance. Nevertheless, the borderline samples in multi-label data sets (MLDs) have not been studied. Hence, this paper examines the borderline samples in MLDs in depth and finds that they have different neighboring relationships with class borders, which gives them different roles in classifier training. Accordingly, they are divided into two types: self-borderline samples and cross-borderline samples. Furthermore, a novel MLD resampling approach called the Multi-Label Borderline Oversampling Technique (MLBOTE) is proposed for multi-label imbalanced learning. MLBOTE identifies three types of seed samples (interior, self-borderline, and cross-borderline) and designs a dedicated oversampling mechanism for each. Meanwhile, it treats not only the minority classes but also classes suffering from one-vs-rest imbalance as candidates for oversampling. Experiments on eight data sets with nine MLC algorithms and three base classifiers compare MLBOTE with several state-of-the-art MLD resampling techniques. The results show that MLBOTE outperforms the other methods in various scenarios.
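The abstract does not give the full MLBOTE procedure, but the core idea it builds on, classifying seed samples as interior or borderline via nearest-neighbor label agreement and then interpolating synthetic minority samples near the class border, can be sketched. The following Python sketch treats a single label one-vs-rest; the function names (classify_seeds, oversample_label), the k/2 neighbor threshold, and the SMOTE-style interpolation are illustrative assumptions, not the authors' exact algorithm, which further splits borderline seeds into self- and cross-borderline types.

    # Illustrative sketch only: a borderline-oversampling step for ONE label
    # of a multi-label data set, treated one-vs-rest. This is NOT the paper's
    # exact MLBOTE; names and thresholds here are hypothetical.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def classify_seeds(X, y, k=5):
        """Tag each positive sample as 'interior' or 'borderline' by how many
        of its k nearest neighbors carry the same label."""
        nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
        _, idx = nn.kneighbors(X)                # idx[:, 0] is the sample itself
        tags = {}
        for i in np.where(y == 1)[0]:
            same = np.sum(y[idx[i, 1:]] == 1)    # neighbors sharing the label
            tags[i] = "interior" if same >= k / 2 else "borderline"
        return tags

    def oversample_label(X, y, n_new, k=5, rng=None):
        """Generate n_new synthetic positives by interpolating each borderline
        seed toward a random same-label neighbor (SMOTE-style)."""
        rng = rng or np.random.default_rng(0)
        tags = classify_seeds(X, y, k)
        # prefer borderline seeds; fall back to all positives if none exist
        seeds = [i for i, t in tags.items() if t == "borderline"] or list(tags)
        pos = np.where(y == 1)[0]
        nn = NearestNeighbors(n_neighbors=min(k, len(pos))).fit(X[pos])
        new_rows = []
        for _ in range(n_new):
            i = rng.choice(seeds)
            _, nbr = nn.kneighbors(X[i][None, :])
            j = pos[rng.choice(nbr[0])]          # a same-label neighbor of seed i
            gap = rng.random()
            new_rows.append(X[i] + gap * (X[j] - X[i]))
        return np.vstack(new_rows)

A per-label routine like this would be run for every class flagged as needing oversampling, which per the abstract includes not only globally minority labels but also labels that are imbalanced in the one-vs-rest sense.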