Computer science
Metric (data warehouse)
Key (lock)
Matrix decomposition
Factorization
Representation (politics)
Algorithm
Artificial intelligence
Sparse matrix
Sparse approximation
Energy (signal processing)
Pattern recognition (psychology)
Factor (programming language)
Machine learning
Data mining
Mathematics
Statistics
Feature vector
Physics
Computer security
Quantum mechanics
Politics
Gaussian distribution
Political science
Law
Programming language
Author
Niall Hurley, Scott Rickard
Identifier
DOI: 10.1109/mlsp.2008.4685455
Abstract
Sparsity is a recurrent theme in machine learning and is used to improve performance of algorithms such as non-negative matrix factorization and the LOST algorithm. Our aim in this paper is to compare several commonly-used sparsity measures according to intuitive attributes that a sparsity measure should have. Sparsity of representations of signals in fields such as blind source separation, compression, sampling and signal analysis has proved not just to be useful but a key factor in the success of algorithms in these areas. Intuitively, a sparse representation is one in which a small number of coefficients contain a large proportion of the energy. In this paper we discuss six properties (Robin Hood, Scaling, Rising Tide, Cloning, Bill Gates and Babies) that we believe a sparsity measure should have. The main contribution of this paper is a table which classifies commonly-used sparsity measures based on whether or not they satisfy these six propositions. Only one of these measures satisfies all six: the Gini index.
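Since the abstract singles out the Gini index as the only measure satisfying all six properties, a minimal Python sketch of that measure may help. The function name gini_index and the NumPy-based implementation are illustrative assumptions, following the standard definition over ascending-sorted absolute coefficient magnitudes, not code taken from the paper.

```python
import numpy as np

def gini_index(c):
    """Gini index of a coefficient vector: 0 for a perfectly uniform
    vector, approaching 1 as the energy concentrates in one coefficient."""
    c = np.sort(np.abs(np.asarray(c, dtype=float)))  # ascending magnitudes
    n = c.size
    total = c.sum()
    if total == 0:
        return 0.0  # all-zero vector: treat as maximally non-sparse
    k = np.arange(1, n + 1)
    # 1 - 2 * sum_k (c_(k) / ||c||_1) * ((n - k + 1/2) / n)
    return 1.0 - 2.0 * np.sum((c / total) * ((n - k + 0.5) / n))

# A vector with all energy in one coefficient scores higher (sparser)
# than a uniform vector of the same length.
print(gini_index([1, 0, 0, 0]))  # 0.75 for n = 4
print(gini_index([1, 1, 1, 1]))  # 0.0
```

The sorted, normalized form is what gives the Gini index its scale invariance, consistent with the abstract's intuition that sparsity is about how the energy is distributed rather than its absolute size.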