Feature selection
Artificial intelligence
Computer science
Feature vector
Feature (linguistics)
Pattern recognition (psychology)
Feature learning
Discriminative model
Embedding
Machine learning
Philosophy
Linguistics
Authors
Wangyang Ying, Dongjie Wang, Haifeng Chen, Yanjie Fu
Source
Journal: ACM Transactions on Knowledge Discovery From Data
[Association for Computing Machinery]
Date: 2024-08-09
Volume/Issue: 18 (9): 1-21
Abstract
Feature selection aims to identify the most pattern-discriminative feature subset. In prior literature, filter (e.g., backward elimination) and embedded (e.g., LASSO) methods have hyperparameters (e.g., top-k, score thresholding) and are tied to specific models, and are thus hard to generalize; wrapper methods search for a feature subset in a huge discrete space and are computationally costly. To transform the way feature selection is done, we regard a selected feature subset as a sequence of selection-decision tokens and reformulate feature selection as a deep sequential generative learning task that distills feature knowledge and generates decision sequences. Our method includes three steps: (1) We develop a deep variational transformer model trained over a joint objective of sequential reconstruction, variational, and performance-evaluator losses. Our model can distill feature selection knowledge and learn a continuous embedding space that maps feature selection decision sequences into embedding vectors associated with utility scores. (2) We leverage the trained feature-subset utility evaluator as a gradient provider to guide the identification of the optimal feature subset embedding. (3) We decode the optimal feature subset embedding to autoregressively generate the best feature selection decision sequence with autostop. Extensive experimental results show this generative perspective is effective and generic, without a large discrete search space or expert-specific hyperparameters. The code is available at http://tinyurl.com/FSDSGL .
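To make the three-step pipeline in the abstract more concrete, the following is a minimal PyTorch sketch of the idea: a variational sequence model trained with reconstruction, KL, and utility-prediction losses; gradient ascent in the learned embedding space guided by the utility head; and greedy autoregressive decoding that stops at an end token. All class names, layer sizes, the unweighted loss sum, and the greedy decoding strategy are illustrative assumptions, not the authors' implementation; their official code is at http://tinyurl.com/FSDSGL.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SubsetVAE(nn.Module):
    """Sketch of step (1): encode a feature-selection decision sequence into a
    continuous embedding, reconstruct the sequence, and predict subset utility."""
    def __init__(self, n_tokens, d_model=64, latent_dim=32):
        super().__init__()
        self.embed = nn.Embedding(n_tokens, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.to_mu = nn.Linear(d_model, latent_dim)
        self.to_logvar = nn.Linear(d_model, latent_dim)
        self.decoder_rnn = nn.GRU(d_model, latent_dim, batch_first=True)
        self.out = nn.Linear(latent_dim, n_tokens)
        self.evaluator = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def encode(self, seq):
        h = self.encoder(self.embed(seq)).mean(dim=1)   # pooled sequence state
        return self.to_mu(h), self.to_logvar(h)

    def decode(self, z, seq):
        # decode token logits conditioned on the latent embedding z
        out, _ = self.decoder_rnn(self.embed(seq), z.unsqueeze(0).contiguous())
        return self.out(out)

    def forward(self, seq, utility):
        mu, logvar = self.encode(seq)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        logits = self.decode(z, seq[:, :-1])                     # teacher forcing
        rec = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                              seq[:, 1:].reshape(-1))
        kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        perf = F.mse_loss(self.evaluator(z).squeeze(-1), utility)
        return rec + kld + perf    # joint loss; weighting omitted for brevity

def gradient_search(model, z_init, steps=50, lr=0.1):
    """Sketch of step (2): move an embedding uphill along the utility gradient."""
    z = z_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        (-model.evaluator(z)).sum().backward()   # maximize predicted utility
        opt.step()
    return z.detach()

def decode_subset(model, z, bos_id, eos_id, max_len=32):
    """Sketch of step (3): autoregressively emit feature tokens until autostop."""
    seq = torch.tensor([[bos_id]], dtype=torch.long)
    for _ in range(max_len):
        nxt = model.decode(z, seq)[:, -1].argmax(dim=-1, keepdim=True)
        seq = torch.cat([seq, nxt], dim=1)
        if nxt.item() == eos_id:                 # stop token reached
            break
    return seq.squeeze(0).tolist()

In this sketch, the optimal subset would be obtained by encoding known-good decision sequences, refining their embeddings with gradient_search, and then calling decode_subset on the refined embedding; the actual paper may use different architectures, loss weights, and search or decoding strategies.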