Computer science
Artificial intelligence
Sparse approximation
Machine learning
Neural coding
Cluster analysis
Feature learning
Sparse matrix
Deep learning
Pattern recognition (psychology)
Quantum mechanics
Physics
Gaussian distribution
Authors
Shiping Wang, Zhaoliang Chen, Shide Du, Zhouchen Lin
Identifier
DOI: 10.1109/TPAMI.2021.3082632
Abstract
Sparsity-constrained optimization problems are common in machine learning, arising in sparse coding, low-rank minimization, and compressive sensing. However, most previous studies have focused on constructing hand-crafted sparse regularizers, while little work has been devoted to learning adaptive sparse regularizers from the input data for specific tasks. In this paper, we propose a deep sparse regularizer learning model that adaptively learns data-driven sparse regularizers. Via the proximal gradient algorithm, we find that learning the sparse regularizer is equivalent to learning a parameterized activation function, which encourages us to learn sparse regularizers within the deep learning framework. We therefore build a neural network composed of multiple blocks, each differentiable and reusable. Every block contains learnable piecewise linear activation functions that correspond to the sparse regularizer being learned. The proposed model is trained with back-propagation, and all of its parameters are learned end-to-end. We apply the framework to multi-view clustering and semi-supervised classification tasks to learn a latent compact representation. Experimental results demonstrate the superiority of the proposed framework over state-of-the-art multi-view learning models.
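The abstract describes unrolling the proximal gradient iteration into network blocks whose learnable piecewise linear activations play the role of the regularizer's proximal operator. Below is a minimal PyTorch sketch of that idea under assumptions of my own, not the paper's implementation: a least-squares data-fidelity term 0.5·||Ax − b||², evenly spaced knots for the piecewise linear activation, and the illustrative class names PiecewiseLinearActivation, ProxGradBlock, and DeepSparseNet.

```python
import torch
import torch.nn as nn

class PiecewiseLinearActivation(nn.Module):
    """Learnable piecewise-linear function standing in for the proximal
    operator of the sparse regularizer being learned. Output values at
    fixed, evenly spaced knots are trainable; inputs are mapped by
    linear interpolation between neighboring knots. (Illustrative.)"""

    def __init__(self, num_knots: int = 9, span: float = 3.0):
        super().__init__()
        self.span = span
        self.register_buffer("knots", torch.linspace(-span, span, num_knots))
        # Identity initialization: each block starts as plain gradient descent.
        self.values = nn.Parameter(self.knots.clone())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x.clamp(-self.span, self.span)
        # Locate the interval each input falls into; the integer indices are
        # not differentiated through, but the interpolation weights are.
        idx = torch.searchsorted(self.knots, x.detach().contiguous())
        idx = idx.clamp(1, self.knots.numel() - 1)
        x0, x1 = self.knots[idx - 1], self.knots[idx]
        y0, y1 = self.values[idx - 1], self.values[idx]
        w = (x - x0) / (x1 - x0)
        return y0 + w * (y1 - y0)

class ProxGradBlock(nn.Module):
    """One unrolled proximal-gradient iteration for
    min_x 0.5 * ||A x - b||^2 + R(x), with prox_R replaced by the
    learnable activation above (assumed least-squares data term)."""

    def __init__(self, num_knots: int = 9):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))  # learnable step size
        self.prox = PiecewiseLinearActivation(num_knots)

    def forward(self, x, A, b):
        grad = A.t() @ (A @ x - b)  # gradient of the data-fidelity term
        return self.prox(x - self.step * grad)

class DeepSparseNet(nn.Module):
    """A stack of differentiable, reusable blocks trained end-to-end."""

    def __init__(self, num_blocks: int = 5):
        super().__init__()
        self.blocks = nn.ModuleList(ProxGradBlock() for _ in range(num_blocks))

    def forward(self, A, b):
        x = torch.zeros(A.shape[1], 1, device=A.device)
        for block in self.blocks:
            x = block(x, A, b)
        return x
```

A quick smoke test of the end-to-end differentiability claimed in the abstract (again illustrative, not an experiment from the paper):

```python
A = torch.randn(30, 50)
x_true = torch.zeros(50, 1)
x_true[::10] = 1.0            # a sparse ground-truth signal
b = A @ x_true
net = DeepSparseNet(num_blocks=5)
loss = ((net(A, b) - x_true) ** 2).mean()
loss.backward()               # gradients reach every step size and knot value
```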