Keywords: computer science; backpropagation; artificial intelligence; deep learning; parameterized complexity; machine learning; artificial neural network; differentiable function; tree (set theory); process (computing); cascade; decision tree; key (lock); algorithm; mathematics; engineering; chemical engineering; operating system; computer security; mathematical analysis
Authors
Ming Pang, Kai Ming Ting, Peng Zhao, Zhi-Hua Zhou
Identifier
DOI: 10.1109/icdm.2018.00158
Abstract
Most studies of deep learning are based on neural network models, in which many layers of parameterized, nonlinear, differentiable modules are trained by backpropagation. Recently, it has been shown that deep learning can also be realized by non-differentiable modules without backpropagation training; this approach is called deep forest. Its representation learning process is based on a cascade of cascades of decision tree forests, whose high memory requirement and high time cost inhibit the training of large models. In this paper, we propose a simple yet effective approach to improve the efficiency of deep forest. The key idea is to pass instances with high confidence directly to the final stage rather than through all the levels. We also provide a theoretical analysis suggesting a means to vary the model complexity from low to high as the level increases in the cascade, which further reduces the memory requirement and time cost. Our experiments show that the proposed approach achieves highly competitive predictive performance while reducing time cost and memory requirement by up to one order of magnitude.
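The core idea described in the abstract (confidence screening: instances whose prediction confidence at some level exceeds a threshold exit the cascade early instead of passing through all levels) can be illustrated with a minimal sketch. This is not the authors' implementation; the level functions, the threshold value, and all names here are hypothetical stand-ins for the forest levels of a real deep forest.

```python
# Hedged sketch of confidence screening in a cascade of predictive levels.
# Each "level" is assumed to map an instance to class probabilities; instances
# whose top probability reaches `threshold` exit early, which is the source of
# the time/memory savings the paper reports. All names here are illustrative.

def cascade_predict(levels, instances, threshold=0.9):
    """levels: list of callables, each mapping instance -> {class: prob}."""
    predictions = {}
    remaining = list(enumerate(instances))
    for depth, level in enumerate(levels):
        still_unsure = []
        for idx, x in remaining:
            probs = level(x)
            top_class = max(probs, key=probs.get)
            # Early exit on high confidence, or forced exit at the last level.
            if probs[top_class] >= threshold or depth == len(levels) - 1:
                predictions[idx] = top_class
            else:
                still_unsure.append((idx, x))
        remaining = still_unsure
        if not remaining:
            break  # nothing left to push to deeper (more expensive) levels
    return [predictions[i] for i in range(len(instances))]


# Toy stand-ins for forest levels: simple threshold rules on a scalar input.
level1 = lambda x: {"pos": 0.95, "neg": 0.05} if x > 10 else {"pos": 0.6, "neg": 0.4}
level2 = lambda x: {"pos": 0.2, "neg": 0.8} if x < 5 else {"pos": 0.7, "neg": 0.3}

print(cascade_predict([level1, level2], [12, 3, 7]))  # ['pos', 'neg', 'pos']
```

Here the instance `12` is classified confidently at the first level and never reaches the second, while `3` and `7` fall below the threshold and are passed on; the deepest level always emits a prediction regardless of confidence.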