Topics
Recursion (computer science), Artificial neural network, Regularization, Stationary point, Algorithm, Computer science, Subspace topology, Minimization, Mathematics, Symbol, Applied mathematics, Discrete mathematics, Mathematical optimization, Artificial intelligence, Arithmetic, Mathematical analysis
Authors
Hui Zhang, Zhengpeng Yuan, Naihua Xiu
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
Publisher: Institute of Electrical and Electronics Engineers
Date: 2021-12-13
Volume/issue: 34(9): 5882-5896
Citations: 4
Identifier
DOI: 10.1109/tnnls.2021.3131406
Abstract
The rectified linear unit (ReLU) deep neural network (DNN) is a classical model in deep learning and has achieved great success in many applications. However, this model has a very large number of parameters, which not only requires huge memory but also imposes an unbearable computational burden. The l2,0 regularization has become a useful technique to cope with this issue. In this article, we design a recursion Newton-like algorithm (RNLA) to simultaneously train and compress ReLU-DNNs with l2,0 regularization. First, we reformulate the multicomposite training model as a constrained optimization problem by explicitly introducing the network nodes as optimization variables. Based on the penalty function of this reformulation, we obtain two types of minimization subproblems. Second, we establish first-order optimality conditions for P-stationary points of the two subproblems; these P-stationary points allow us to equivalently derive two sequences of stationary equations, which are piecewise linear matrix equations. We solve these equations by a column Newton-like method in a group sparse subspace with lower computational scale and cost. Finally, numerical experiments are conducted on real datasets, and the results demonstrate that the proposed RNLA is effective and applicable.
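For intuition about the node-level (group) sparsity that l2,0 regularization enforces, the sketch below trains a one-hidden-layer ReLU network with plain gradient steps followed by group hard-thresholding that keeps only the k hidden nodes with the largest weight-row norms. This is a minimal NumPy illustration of the compression effect, not the paper's RNLA (no penalty reformulation, P-stationary equations, or column Newton-like solver); the data, layer sizes, step size, and the retained-node count k_keep are all illustrative assumptions.

```python
# Minimal sketch: ReLU network + group hard-thresholding on hidden nodes.
# Mimics the node-level sparsity targeted by l2,0 regularization; NOT the RNLA.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (sizes are illustrative assumptions).
n, d, h = 256, 20, 64            # samples, input dim, hidden nodes
X = rng.standard_normal((n, d))
y = np.sin(X[:, :1]) + 0.1 * rng.standard_normal((n, 1))

W1 = rng.standard_normal((h, d)) * 0.1   # hidden weights (one row per node)
b1 = np.zeros((h, 1))
W2 = rng.standard_normal((1, h)) * 0.1   # output weights (one column per node)
b2 = np.zeros((1, 1))

lr, k_keep = 1e-2, 16            # step size and number of hidden nodes to keep

def forward(X):
    Z = W1 @ X.T + b1            # pre-activations, shape (h, n)
    A = np.maximum(Z, 0.0)       # ReLU
    out = (W2 @ A + b2).T        # predictions, shape (n, 1)
    return Z, A, out

for it in range(500):
    Z, A, out = forward(X)
    err = out - y                              # residuals, shape (n, 1)

    # Backpropagation for mean squared error.
    gW2 = (err.T @ A.T) / n                    # (1, h)
    gb2 = err.mean(axis=0, keepdims=True).T    # (1, 1)
    dA = (W2.T @ err.T) * (Z > 0)              # (h, n)
    gW1 = (dA @ X) / n                         # (h, d)
    gb1 = dA.mean(axis=1, keepdims=True)       # (h, 1)

    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

    # Group hard-thresholding: keep only the k_keep nodes whose [W1, b1]
    # rows have the largest norms; zero the remaining rows and the
    # matching columns of W2, i.e., prune whole hidden nodes.
    norms = np.linalg.norm(np.hstack([W1, b1]), axis=1)
    keep = np.argsort(norms)[-k_keep:]
    mask = np.zeros(h, dtype=bool); mask[keep] = True
    W1[~mask] = 0.0; b1[~mask] = 0.0; W2[:, ~mask] = 0.0

Z, A, out = forward(X)
print("final MSE:", float(np.mean((out - y) ** 2)))
print("active hidden nodes:", int(mask.sum()), "of", h)
```

Running the sketch prints the final training error and confirms that only k_keep of the h hidden nodes remain active, which is the node-level compression behavior the abstract attributes to l2,0 regularization.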