DOI: 10.1198/016214506000000735
Abstract
The lasso is a popular technique for simultaneous estimation and variable selection. Lasso variable selection has been shown to be consistent under certain conditions. In this work we derive a necessary condition for the lasso variable selection to be consistent. Consequently, there exist certain scenarios where the lasso is inconsistent for variable selection. We then propose a new version of the lasso, called the adaptive lasso, where adaptive weights are used for penalizing different coefficients in the ℓ1 penalty. We show that the adaptive lasso enjoys the oracle properties; namely, it performs as well as if the true underlying model were given in advance. Similar to the lasso, the adaptive lasso is shown to be near-minimax optimal. Furthermore, the adaptive lasso can be solved by the same efficient algorithm for solving the lasso. We also discuss the extension of the adaptive lasso in generalized linear models and show that the oracle properties still hold under mild regularity conditions. As a byproduct of our theory, the nonnegative garotte is shown to be consistent for variable selection.

Key words: Asymptotic normality; Lasso; Minimax; Oracle inequality; Oracle procedure; Variable selection.
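The abstract notes that the adaptive lasso can be solved with the same efficient algorithm as the ordinary lasso. A minimal sketch of how that works, assuming a two-stage recipe with OLS pilot estimates for the weights (one of the pilot choices discussed in the paper) and a generic coordinate-descent lasso solver: the weighted ℓ1 problem is reduced to a plain lasso by rescaling each column by its weight, then rescaling the solution back. The simulated data and tuning values below are illustrative, not from the paper.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=500):
    """Plain lasso via cyclic coordinate descent with soft-thresholding."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding coordinate j.
            r_j = y - X @ beta + X[:, j] * beta[j]
            z = X[:, j] @ r_j
            # Soft-threshold: exact zeros give variable selection.
            beta[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_sq[j]
    return beta

def adaptive_lasso(X, y, lam, gamma=1.0, n_iter=500):
    """Adaptive lasso sketch: weights w_j = 1 / |beta_ols_j|^gamma,
    then a weighted-l1 lasso solved by reusing the plain lasso solver
    on rescaled columns x_j* = x_j / w_j."""
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    # Guard against division by an exactly-zero pilot estimate.
    w = 1.0 / np.maximum(np.abs(beta_ols), 1e-12) ** gamma
    Xs = X / w                        # rescale columns by the weights
    beta_star = lasso_cd(Xs, y, lam, n_iter)
    return beta_star / w              # map back to the original scale
```

Small pilot coefficients produce large weights, so the corresponding rescaled columns carry an effectively huge penalty and are thresholded to exactly zero, while strong coefficients are penalized only lightly; this differential penalization is what drives the oracle behavior the abstract describes.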