Topics
Discriminant
Artificial intelligence
Computer science
Feature selection
Pattern recognition
Artificial neural network
Nonlinear system
Gradient descent
Preprocessor
Feature (machine learning)
Data preprocessing
Regularization (mathematics)
Machine learning
Philosophy
Physics
Quantum mechanics
Linguistics
Authors
Rong Wang, Jintang Bian, Feiping Nie, Xuelong Li
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
[Institute of Electrical and Electronics Engineers]
Date: 2023-11-01
Volume/Issue: 34 (11): 9493-9505
Citations: 2
Identifier
DOI:10.1109/tnnls.2022.3209716
Abstract
Feature selection is an important and effective data preprocessing method that removes noisy and redundant features while retaining the relevant and discriminative features in high-dimensional data. In real-world applications, the relationships between data samples and their labels are usually nonlinear. However, most existing feature selection models focus on learning a linear transformation matrix, which cannot capture such nonlinear structure in practice and degrades the performance of downstream tasks. To address this issue, we propose a novel nonlinear feature selection method that selects the most relevant and discriminative features in high-dimensional datasets. Specifically, our method learns the nonlinear structure of high-dimensional data with a neural network trained under a cross-entropy loss, and then applies a structured sparsity norm, such as the l2,p-norm, to regularize the weight matrix connecting the input layer and the first hidden layer of the network, thereby learning a weight for each feature. A structurally sparse weight matrix is thus obtained by conducting nonlinear learning based on a neural network with structured sparsity regularization. We then use the gradient descent method to reach the optimal solution of the proposed model. Experimental results on several synthetic and real-world datasets demonstrate the effectiveness and superiority of the proposed nonlinear feature selection model.
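The pipeline the abstract describes (cross-entropy training of a small network, a structured sparsity penalty on the first-layer weight matrix, feature scores from its row norms) can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the network shapes, hyperparameters, synthetic data, and the choice of p=1 (the ℓ2,1 special case, optimized here by plain subgradient descent) are all assumptions.

```python
import numpy as np

def l2p_norm(W, p=1.0):
    # ℓ2,p-norm of a matrix: ℓ2-norms of the rows, combined with an ℓp-norm.
    # p = 1 gives the common ℓ2,1-norm, which encourages whole rows to vanish.
    row_norms = np.linalg.norm(W, axis=1)
    return (row_norms ** p).sum() ** (1.0 / p)

# Synthetic data (hypothetical): only features 0 and 1 carry the label signal.
rng = np.random.default_rng(0)
n, d, h, c = 200, 10, 16, 2          # samples, features, hidden units, classes
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(c)[y]                     # one-hot labels

W1 = rng.normal(scale=0.1, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.1, size=(h, c)); b2 = np.zeros(c)
lam, lr = 1e-2, 0.5                  # regularization weight, step size (assumed)

for _ in range(300):
    # Forward pass: one tanh hidden layer, softmax output.
    H = np.tanh(X @ W1 + b1)
    logits = H @ W2 + b2
    Z = np.exp(logits - logits.max(axis=1, keepdims=True))
    P = Z / Z.sum(axis=1, keepdims=True)

    # Backward pass for the averaged cross-entropy loss.
    G = (P - Y) / n
    gW2, gb2 = H.T @ G, G.sum(0)
    GH = (G @ W2.T) * (1 - H ** 2)
    # Subgradient of the ℓ2,1 penalty on W1: each row divided by its norm.
    row = np.linalg.norm(W1, axis=1, keepdims=True) + 1e-8
    gW1, gb1 = X.T @ GH + lam * W1 / row, GH.sum(0)

    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Per-feature importance: row norms of the structurally sparse first layer.
scores = np.linalg.norm(W1, axis=1)
```

Ranking `scores` and keeping the top-k rows then performs the actual feature selection; irrelevant rows are driven toward zero by the penalty while informative rows survive the data-fit gradient.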