Interpretability
Deep learning
Artificial intelligence
Computer science
Machine learning
Artificial neural networks
Salient
Deep neural networks
Partial least squares regression
Authors
Xiangyin Kong, Zhiqiang Ge
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
[Institute of Electrical and Electronics Engineers]
Date: 2023-11-01
Volume/Issue: 34 (11): 8923-8937
Citations: 8
Identifier
DOI:10.1109/tnnls.2022.3154090
Abstract
The salient progress of deep learning is accompanied by nonnegligible deficiencies, such as: 1) the interpretability problem; 2) the requirement for large amounts of data; 3) the difficulty of designing and tuning parameters; and 4) heavy computational complexity. Despite the remarkable achievements of neural network-based deep models in many fields, the practical applications of deep learning are still limited by these shortcomings. This article proposes a new concept called the lightweight deep model (LDM). LDM absorbs the useful ideas of deep learning and overcomes their shortcomings to a certain extent. We explore the idea of LDM from the perspective of partial least squares (PLS) by constructing a deep PLS (DPLS) model. The feasibility and merits of DPLS are proved theoretically; DPLS is then further generalized to a more common form (GDPLS) by adding a nonlinear mapping layer between two cascaded PLS layers in the model structure. The superiority of DPLS and GDPLS is demonstrated through four practical cases involving two regression problems and two classification tasks, in which our model not only achieves competitive performance compared with existing neural network-based deep models but also proves to be a more interpretable and efficient method: we know exactly how it improves performance and how it arrives at correct results. Note that our proposed model can currently only be regarded as an alternative to fully connected neural networks and cannot completely replace mature deep vision or language models.