Keywords
Interpretability
Regularization
Computer science
Feature selection
Artificial intelligence
Artificial neural network
Feature (machine learning)
Lasso (statistics)
Generalization
Early stopping
Machine learning
Algorithm
Pattern recognition
Mathematics
Philosophy
World Wide Web
Mathematical analysis
Linguistics
Authors
Shengyuan Xu, Zhiqi Bu, Pratik Chaudhari, Ian Barnett
Identifier
DOI:10.1007/978-3-031-43418-1_21
Abstract
Interpretable machine learning has demonstrated impressive performance while preserving explainability. In particular, neural additive models (NAM) bring interpretability to black-box deep learning and achieve state-of-the-art accuracy within the large family of generalized additive models. To empower NAM with feature selection and improve its generalization, we propose the sparse neural additive model (SNAM), which employs group sparsity regularization (e.g. Group LASSO), where each feature is learned by a sub-network whose trainable parameters are clustered as a group. We study the theoretical properties of SNAM with novel techniques that tackle the non-parametric truth, thus extending beyond classical sparse linear models such as the LASSO, which only work for the parametric truth. Specifically, we show that SNAM trained with subgradient and proximal gradient descent provably converges to zero training loss as $t\rightarrow\infty$, and that the estimation error of SNAM vanishes asymptotically as $n\rightarrow\infty$. We also prove that SNAM, like the LASSO, achieves exact support recovery, i.e. perfect feature selection, under appropriate regularization. Moreover, we show that SNAM generalizes well and preserves 'identifiability', recovering each feature's effect. We validate our theories via extensive experiments and further demonstrate the accuracy and efficiency of SNAM. (The appendix can be found at https://arxiv.org/abs/2202.12482.)
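Since this page carries only the abstract, the following is a minimal sketch of the SNAM recipe it describes, not the authors' reference implementation. Each feature $X_j$ is fit by its own sub-network $f_j(\cdot;\theta_j)$, the sub-network outputs are summed, and a Group LASSO penalty is applied over each sub-network's parameter group, i.e. an objective plausibly of the form (exact formulation per the paper)

$$\min_{\theta}\ \frac{1}{2n}\Big\|y - \sum_{j=1}^{p} f_j(X_j;\theta_j)\Big\|_2^2 + \lambda\sum_{j=1}^{p}\|\theta_j\|_2.$$

The PyTorch sketch below assumes this objective and uses the proximal gradient descent mentioned in the abstract; the network sizes and the hyperparameters hidden, lam, lr, and steps are illustrative assumptions.

import torch
import torch.nn as nn

class SNAM(nn.Module):
    """One small sub-network per input feature; outputs are summed."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.subnets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_features)
        )

    def forward(self, x):
        # x: (batch, n_features); each column feeds its own sub-network
        return sum(net(x[:, j:j + 1]) for j, net in enumerate(self.subnets))

def prox_group_lasso(model, lam, lr):
    """Proximal step: block soft-thresholding of each sub-network's
    parameter group; a group shrunk exactly to zero drops that feature."""
    with torch.no_grad():
        for net in model.subnets:
            params = list(net.parameters())
            norm = torch.sqrt(sum((p ** 2).sum() for p in params))
            scale = torch.clamp(1 - lr * lam / (norm + 1e-12), min=0.0)
            for p in params:
                p.mul_(scale)

def train(model, x, y, lam=0.1, lr=1e-2, steps=1000):
    # Proximal gradient descent on least-squares loss + group penalty;
    # y is expected to have shape (batch, 1) to match the model output.
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        model.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        with torch.no_grad():
            for p in model.parameters():
                p -= lr * p.grad          # plain gradient step
        prox_group_lasso(model, lam, lr)  # then the proximal step
    return model

Because the proximal step is block soft-thresholding, an entire sub-network's parameter group can be zeroed out at once, which is the mechanism behind the feature selection (support recovery) behavior the abstract refers to.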