Keywords
Interpretability
Ensemble learning
Computer science
Classifier
Random forest
Artificial intelligence
Machine learning
Decision tree
Benchmark
Cascade classifier
Pattern recognition
Random subspace method
Authors
Xudong Luo,Long Ye,Xiaolan Liu,Xiaohao Wen,MengChu Zhou,Qin Zhang
Identifier
DOI:10.1109/tnnls.2023.3290203
Abstract
To construct a strong classifier ensemble, base classifiers should be accurate and diverse. However, there is no uniform standard for the definition and measurement of diversity. This work proposes a learners' interpretability diversity (LID) to measure the diversity of interpretable machine learners. It then proposes an LID-based classifier ensemble. Such an ensemble concept is novel because: 1) interpretability is used as an important basis for diversity measurement and 2) the difference between two interpretable base learners can be measured before training. To verify the proposed method's effectiveness, we choose a decision-tree-initialized dendritic neuron model (DDNM) as a base learner for ensemble design. We apply it to seven benchmark datasets. The results show that the DDNM ensemble combined with LID obtains superior performance in terms of accuracy and computational efficiency compared to some popular classifier ensembles. A random-forest-initialized dendritic neuron model (RDNM) combined with LID is an outstanding representative of the DDNM ensemble.
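The abstract notes that diversity has no uniform definition or measurement. For context, a classic *behavior-based* alternative is the pairwise disagreement rate, computed from classifiers' predictions after training. The sketch below illustrates that conventional measure only; it is not the paper's LID, which by contrast compares interpretable learners before training. All function names here are illustrative, not from the paper.

```python
from itertools import combinations

def disagreement(preds_a, preds_b):
    """Fraction of samples on which two classifiers disagree (0 = identical)."""
    assert len(preds_a) == len(preds_b)
    return sum(a != b for a, b in zip(preds_a, preds_b)) / len(preds_a)

def mean_pairwise_diversity(all_preds):
    """Average disagreement over every pair of classifiers in an ensemble."""
    pairs = list(combinations(all_preds, 2))
    return sum(disagreement(a, b) for a, b in pairs) / len(pairs)

# Toy predictions from three base classifiers on five samples.
preds = [
    [0, 1, 1, 0, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
]
print(mean_pairwise_diversity(preds))  # → 0.4 (average of 0.2, 0.4, 0.6)
```

Because such behavioral measures require trained models and labeled predictions, a measure computable before training (as LID claims) can save the cost of training candidate learners that would later be discarded.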