Artificial intelligence
Pattern recognition (psychology)
Feature learning
Computer science
Discriminant
Dimensionality reduction
Sparse approximation
Subspace topology
Feature selection
Nonlinear dimensionality reduction
Eigenvector
Laplacian matrix
Machine learning
Graph
Theoretical computer science
Authors
Mei‐Yu Huang,Hongmei Chen,Yong Mi,Chuan Luo,Shi‐Jinn Horng,Tianrui Li
Identifier
DOI:10.1016/j.knosys.2023.111105
Abstract
As an effective dimensionality reduction method, unsupervised feature selection (UFS) focuses on the mutual correlations between high-dimensional data features but often overlooks the intrinsic relationships between instances. Pseudo-labels learned from the data are also used to guide feature selection in UFS. However, the raw data space may contain noise and outliers, lowering the accuracy of the learned pseudo-label matrix. We propose a minimum-redundancy UFS approach that tackles these problems by jointly performing sparse latent representation learning and dual manifold regularization (SLRDR). First, SLRDR learns a latent representation subspace by exploring the interconnections among the original data. To enhance subspace sparsity, the ℓ2,1-norm is applied to the residual matrix of latent representation learning. Pseudo-label matrix learning is then carried out in this high-quality latent space, yielding effective pseudo-label information that provides more useful guidance for sparse regression. Second, based on the manifold learning hypothesis, SLRDR exploits the local structural properties of features in feature space and explores the association between data and labels, allowing the model to learn richer and more accurate structural information. In addition, the ℓ2,1/2-norm is imposed on the weight matrix to obtain a minimum-redundancy solution and select more discriminative features. Finally, an alternating iterative method is used to solve the optimization problem of the objective function, and the convergence of the model is analyzed theoretically. A series of comparative experiments against ten existing algorithms on nine benchmark datasets verifies the model's effectiveness.
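The abstract relies on two row-wise matrix norms: the ℓ2,1-norm on the residual matrix and the ℓ2,1/2 quasi-norm on the weight matrix, whose row-sparsity is what makes feature selection possible. The following is a minimal NumPy sketch of these norms and the resulting feature ranking, not the paper's actual objective or solver; it assumes the common convention ‖W‖₂,₁/₂ = (Σᵢ‖wⁱ‖₂^{1/2})², and the toy matrix `W` is purely illustrative.

```python
import numpy as np

def l21_norm(W):
    """l2,1-norm: sum of the l2 norms of the rows of W."""
    return np.linalg.norm(W, axis=1).sum()

def l2_half_norm(W):
    """l2,1/2 quasi-norm (assumed convention): (sum of square roots of the
    row l2 norms)^2. Penalizing this drives whole rows of W to zero,
    which deselects the corresponding features."""
    return np.sqrt(np.linalg.norm(W, axis=1)).sum() ** 2

def select_features(W, k):
    """Rank features by the l2 norm of their weight rows; keep the top k."""
    scores = np.linalg.norm(W, axis=1)
    return np.argsort(scores)[::-1][:k]

# Toy weight matrix: rows = features, columns = pseudo-label dimensions.
W = np.array([[3.0, 4.0],
              [0.0, 0.0],   # an all-zero row means the feature is dropped
              [6.0, 8.0]])
print(l21_norm(W))            # row norms 5 + 0 + 10 = 15.0
print(select_features(W, 2))  # features 2 and 0 have the largest row norms
```

The row-wise structure is the key design choice: an entry-wise ℓ1 penalty would zero individual weights, whereas the ℓ2,1 and ℓ2,1/2 penalties zero entire rows, so each feature is kept or discarded as a whole.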