Discriminative model
Artificial intelligence
Feature selection
Pattern recognition (psychology)
Computer science
Curse of dimensionality
Feature vector
Regularization (linguistics)
Dimensionality reduction
Feature (linguistics)
Machine learning
Philosophy
Linguistics
Authors
Qingwei Jia, Tingquan Deng, Shilong Wang, Changzhong Wang
Identifiers
DOI:10.1016/j.patcog.2024.110583
Abstract
Feature selection is a key technique for tackling the curse of dimensionality in multi-label learning. Many embedded multi-label feature selection methods have been developed, but they face challenges in identifying and excluding redundant features. To address this issue, this paper proposes a multi-label feature selection method that combines robust structural learning with discriminative label regularization. The proposed method starts from the feature space rather than the data space, motivated by the principle that redundant features exhibit high similarity or strong correlation. To exclude redundant features, a regularization on the feature selection matrix is designed by combining an ℓ2,1-norm penalty with inner products of feature weight vectors; this regularization helps learn a robust structure in the feature selection matrix. Meanwhile, both the similarity and dissimilarity of instance labels are exploited to capture discriminative label correlations. Extensive experiments verify the effectiveness of the proposed model for feature selection.
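The regularizer described in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the paper's actual formulation: it assumes a feature selection matrix `W` whose rows are feature weight vectors, combines the ℓ2,1-norm (sum of row-wise ℓ2 norms, promoting row sparsity) with the off-diagonal inner products of those rows (penalizing pairs of redundant, strongly correlated features). The trade-off parameters `alpha` and `beta` and the exact combination are hypothetical.

```python
import numpy as np

def robust_structure_penalty(W, alpha=1.0, beta=0.1):
    """Illustrative sketch of an l2,1-norm plus feature-redundancy penalty.

    W : (d, c) array, row i is the weight vector of feature i.
    alpha, beta : hypothetical trade-off parameters (not from the paper).
    """
    # l2,1-norm: sum of the l2 norms of the rows of W.
    l21 = np.sum(np.linalg.norm(W, axis=1))
    # Inner products between feature weight vectors; redundant (highly
    # correlated) features produce large off-diagonal entries of W W^T.
    gram = np.abs(W @ W.T)
    redundancy = np.sum(gram) - np.trace(gram)
    return alpha * l21 + beta * redundancy
```

For orthogonal (non-redundant) feature weight vectors the redundancy term vanishes and only the sparsity term remains, which is the behavior the abstract attributes to the combined regularization.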