Computer science
Neural coding
Algorithm
Differentiable function
Approximate inference
Computation
Inference
Coordinate descent
Coding (set theory)
Quadratic equation
Sparse approximation
Approximation algorithm
Coding (social sciences)
Approximation error
Artificial intelligence
Pattern recognition (psychology)
Mathematics
Mathematical analysis
Geometry
Set (abstract data type)
Programming language
Authors
Karol Gregor, Yann LeCun
Source
Venue: International Conference on Machine Learning
Date: 2010-06-21
Pages: 399-406
Citations: 494
Abstract
In Sparse Coding (SC), input vectors are reconstructed using a sparse linear combination of basis vectors. SC has become a popular method for extracting features from data. For a given input, SC minimizes a quadratic reconstruction error with an L1 penalty term on the code. The process is often too slow for applications such as real-time pattern recognition. We propose two versions of a very fast algorithm that produces approximate estimates of the sparse code that can be used to compute good visual features, or to initialize exact iterative algorithms. The main idea is to train a non-linear, feed-forward predictor with a specific architecture and a fixed depth to produce the best possible approximation of the sparse code. A version of the method, which can be seen as a trainable version of Li and Osher's coordinate descent method, is shown to produce approximate solutions with 10 times less computation than Li and Osher's for the same approximation error. Unlike previous proposals for sparse code predictors, the system allows a kind of approximate explaining away to take place during inference. The resulting predictor is differentiable and can be included into globally-trained recognition systems.
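For context, the objective the abstract refers to, a quadratic reconstruction error with an L1 penalty on the code z, is the standard sparse coding energy E(x, z) = (1/2) ||x - W_d z||_2^2 + alpha * ||z||_1, where the columns of W_d are the basis vectors. The sketch below illustrates the kind of fixed-depth, feed-forward predictor the abstract describes, using a learned filter matrix, a learned mutual-inhibition matrix, and a soft-thresholding nonlinearity; all names, shapes, and parameter values here are illustrative assumptions rather than the paper's actual implementation.

import numpy as np

def soft_threshold(u, theta):
    # Element-wise shrinkage nonlinearity: sign(u) * max(|u| - theta, 0).
    return np.sign(u) * np.maximum(np.abs(u) - theta, 0.0)

def fixed_depth_encoder(x, We, S, theta, depth=3):
    # A fixed number of feed-forward stages that approximate the sparse code:
    # compute b = We @ x once, then repeat z <- soft_threshold(b + S @ z, theta).
    # In practice We, S, and theta would be trained to regress onto exact
    # sparse codes; here they are placeholders for illustration only.
    b = We @ x
    z = soft_threshold(b, theta)
    for _ in range(depth - 1):
        z = soft_threshold(b + S @ z, theta)
    return z

# Toy usage with random parameters (illustration only).
rng = np.random.default_rng(0)
n_input, n_code = 64, 128
x = rng.standard_normal(n_input)
We = 0.1 * rng.standard_normal((n_code, n_input))
S = 0.1 * rng.standard_normal((n_code, n_code))
z_hat = fixed_depth_encoder(x, We, S, theta=0.5)

The second variant mentioned in the abstract instead unrolls a fixed budget of coordinate-descent-style updates; only the feed-forward form is sketched here.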