Authors
Weirong Liu, Peidong Liu, Changhong Shi, Zhiqiang Zhang, Zhijun Li, Chaorong Liu
Abstract
As a deep neural network (DNN) model compression method, the learning-compression (LC) algorithm based on pre-trained models and matrix decomposition increases training time and ignores the structural information of the model. In this manuscript, a tensor decomposition-based direct LC (TDLC) algorithm without pre-trained models is proposed. TDLC eliminates the pre-trained model and, for the first time, applies tensor decomposition to the LC algorithm to preserve the structural features of the model. TDLC has two key steps. First, an optimal rank selection method is proposed in the compression step (C-step) to find the globally optimal ranks of the tensor decomposition. Second, in the learning step (L-step), TDLC uses a cyclical learning rate, in contrast to traditional monotonically decreasing learning-rate schedules, to improve the generalization performance of the uncompressed model. TDLC obtains the optimal compressed model by alternately optimizing the L-step and the C-step. TDLC is compared with 16 state-of-the-art compression methods in the experiments. Extensive experimental results show that TDLC produces high-accuracy compressed models at high compression rates. Compared with TDLC-pre-trained, TDLC notably achieves a 30% reduction in training time and an 11% reduction in parameters on Resnet32, while improving accuracy by 0.2%.
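The abstract says the L-step replaces a monotonic learning-rate schedule with a cyclical one. A minimal sketch of what such a schedule could look like, assuming the triangular cyclical policy of Smith (2017) via PyTorch's built-in CyclicLR; the model, base/max rates, and cycle length below are placeholders, not values from the paper:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # placeholder model, not the paper's network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer,
    base_lr=1e-4,        # lower bound of the cycle (assumed value)
    max_lr=1e-1,         # upper bound of the cycle (assumed value)
    step_size_up=2000,   # iterations from base_lr up to max_lr (assumed)
    mode="triangular",
)

for step in range(10):   # stand-in for the real training loop
    optimizer.zero_grad()
    loss = model(torch.randn(4, 10)).pow(2).mean()
    loss.backward()
    optimizer.step()
    scheduler.step()     # advance the cyclical schedule once per batch
```

The learning rate then oscillates between base_lr and max_lr instead of decaying monotonically, which is the behavior the abstract credits with better generalization of the uncompressed model.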
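To make the L-step/C-step alternation concrete, here is a rough sketch of a generic LC-style loop in which the C-step compresses a 4-D convolution kernel by Tucker decomposition (via the tensorly library) and the L-step takes gradient steps on the task loss plus a quadratic coupling penalty. The ranks, penalty weight mu, and dummy gradient are assumptions for illustration; the paper's optimal-rank selection is not reproduced here:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

def c_step(weights, ranks):
    """Compress a 4-D conv kernel by Tucker decomposition at given ranks."""
    core, factors = tucker(tl.tensor(weights), rank=ranks)
    return tl.to_numpy(tl.tucker_to_tensor((core, factors)))

def l_step(weights, compressed, grad_fn, mu=1e-3, lr=1e-2, steps=10):
    """Gradient steps on the task loss plus the quadratic LC coupling term."""
    w = weights.copy()
    for _ in range(steps):
        w -= lr * (grad_fn(w) + mu * (w - compressed))
    return w

# Toy usage: a random "conv kernel" and a stand-in task gradient.
rng = np.random.default_rng(0)
w = rng.standard_normal((16, 8, 3, 3))
grad_fn = lambda w: 2 * w                   # placeholder for dL/dW
compressed = c_step(w, ranks=[8, 4, 3, 3])  # assumed ranks, not the paper's
for _ in range(5):                          # alternate L-step and C-step
    w = l_step(w, compressed, grad_fn)
    compressed = c_step(w, ranks=[8, 4, 3, 3])
```

In TDLC the ranks would be chosen by the proposed global rank-selection method rather than fixed in advance, and the tensor decomposition (rather than a matrix factorization of flattened weights) is what lets the C-step preserve the kernel's multi-dimensional structure.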