Overfitting
Interpretability
Artificial intelligence
Gradient boosting
Boosting (machine learning)
Computer science
Artificial neural network
Machine learning
Gradient descent
Deep learning
Convolutional neural network
Ensemble learning
Regression
Pattern recognition (psychology)
Mathematics
Random forest
Statistics
Authors
Jianwei Dong, Yumin Chen, Bingyu Yao, Xiao Zhang, Nianfeng Zeng
Identifier
DOI: 10.1016/j.asoc.2022.109067
Abstract
Boosting is an ensemble learning technique; variants such as XGBoost and GBDT use decision trees as weak learners and achieve strong results on classification and regression problems. Neural networks perform excellently on image and speech recognition, but their weak interpretability limits the development of fusion models. Drawing on the principles and methods of traditional boosting models, we propose Neural Network Boosting (NNBoost) regression, which takes shallow neural networks with simple structures as weak learners. NNBoost is a new ensemble learning method that obtains low regression errors on several data sets. The target loss function of NNBoost is approximated by a Taylor expansion, and by deriving its gradient we give a gradient descent algorithm. Deep learning architectures are complex and suffer from problems such as vanishing gradients, weak interpretability, and parameters that are difficult to tune. We use an ensemble of simple neural networks to alleviate the vanishing-gradient problem, which is laborious to solve in deep learning, and to curb overfitting of the learning algorithm. Finally, experiments verify the correctness and effectiveness of NNBoost from multiple angles, demonstrate the effect of fusing multiple shallow neural networks, and, to a certain extent, widen the development path of the boosting idea and deep learning.
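The abstract's core idea — gradient boosting in which each weak learner is a shallow neural network fitted to the negative gradient of the loss — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class names (`ShallowNet`, `NNBoostSketch`) and all hyperparameters are assumptions, and the example uses squared-error loss, for which the negative gradient is simply the residual.

```python
import numpy as np

class ShallowNet:
    """One-hidden-layer tanh network for regression, trained by
    full-batch gradient descent on squared error (a stand-in weak learner)."""
    def __init__(self, n_hidden=8, lr=0.1, epochs=200, seed=0):
        self.n_hidden, self.lr, self.epochs = n_hidden, lr, epochs
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n, d = X.shape
        h = self.n_hidden
        self.W1 = self.rng.normal(0.0, 1.0, (d, h))
        self.b1 = np.zeros(h)
        self.w2 = self.rng.normal(0.0, 0.1, h)
        self.b2 = 0.0
        for _ in range(self.epochs):
            A = np.tanh(X @ self.W1 + self.b1)       # hidden activations
            pred = A @ self.w2 + self.b2
            g = 2.0 * (pred - y) / n                 # d(MSE)/d(pred)
            gA = np.outer(g, self.w2) * (1 - A**2)   # backprop through tanh
            self.w2 -= self.lr * (A.T @ g)
            self.b2 -= self.lr * g.sum()
            self.W1 -= self.lr * (X.T @ gA)
            self.b1 -= self.lr * gA.sum(axis=0)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W1 + self.b1) @ self.w2 + self.b2


class NNBoostSketch:
    """Gradient boosting with shallow networks as weak learners.
    Each round fits a ShallowNet to the current residuals y - F(x),
    i.e. the negative gradient of squared-error loss."""
    def __init__(self, n_rounds=15, shrinkage=0.5):
        self.n_rounds, self.shrinkage = n_rounds, shrinkage

    def fit(self, X, y):
        self.f0 = y.mean()                           # constant base model
        self.learners = []
        F = np.full(len(y), self.f0)
        for t in range(self.n_rounds):
            net = ShallowNet(seed=t).fit(X, y - F)   # fit negative gradient
            self.learners.append(net)
            F += self.shrinkage * net.predict(X)     # additive update
        return self

    def predict(self, X):
        F = np.full(X.shape[0], self.f0)
        for net in self.learners:
            F += self.shrinkage * net.predict(X)
        return F
```

As a usage example, fitting `NNBoostSketch` to a one-dimensional nonlinear target such as `y = sin(2x)` drives the training error well below that of the constant base model, which illustrates the fusion effect the abstract describes: many simple networks combined additively approximate a function none of them fits well alone.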