K-SVD
Discriminative
Artificial intelligence
Pattern recognition
Computer science
Sparse approximation
Inpainting
Singular value decomposition
Curse of dimensionality
Machine learning
Image (mathematics)
Source
Journal: Elsevier eBooks
[Elsevier]
Date: 2023-01-01
Pages: 55-77
Cited by: 1
Identifiers
DOI:10.1016/b978-0-323-91776-6.00004-x
Abstract
Sparse representation (SR) modeling originates from compressed sensing theory, with rigorous mathematical error bounds and proofs. The SR of a signal is a linear combination of very few columns of a “dictionary,” implicitly reducing dimensionality. Training dictionaries so that they represent each class of signals with minimal reconstruction error is called dictionary learning (DL). The Method of Optimal Directions (MOD) and K-SVD are popular DL methods that have been used successfully in reconstruction-based image-processing applications such as image denoising and image inpainting. Other DL algorithms, such as Discriminative K-SVD and Label Consistent K-SVD, are supervised methods built on K-SVD. In our experiments, the classification performance of these methods is not impressive on Telugu OCR data sets, which have many classes and high input dimensionality. Many researchers have used statistical concepts to design dictionaries for classification or recognition. A brief review of some statistical techniques applied in discriminative DL is given here. The main objective of the methods described in this chapter is to improve classification using sparse representation. This chapter also describes a hybrid approach in which the sparse coefficients of the input data are used to train a simple multilayer perceptron with backpropagation. The classification results on the test data are comparable with those of other computation-intensive methods.
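As a concrete illustration of the K-SVD alternation referred to above, the following is a minimal NumPy sketch, not the chapter's implementation: sparse coding is delegated to scikit-learn's OMP solver, and each atom is then refreshed with a rank-1 SVD of its residual. The data, dictionary size, sparsity level, and iteration count are illustrative assumptions.

```python
# Minimal K-SVD sketch: alternate OMP sparse coding with SVD-based atom updates.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
n_features, n_atoms, n_signals, sparsity = 64, 128, 500, 5

Y = rng.standard_normal((n_features, n_signals))   # training signals, one per column
D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)                     # unit-norm dictionary atoms

for _ in range(10):
    # Sparse coding step: X holds the coefficients, one column per signal.
    X = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)
    # Dictionary update step: revise each atom and its coefficients in turn.
    for k in range(n_atoms):
        users = np.nonzero(X[k, :])[0]             # signals that currently use atom k
        if users.size == 0:
            continue
        X[k, users] = 0.0
        E = Y[:, users] - D @ X[:, users]          # residual with atom k removed
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]                          # best rank-1 replacement atom
        X[k, users] = s[0] * Vt[0, :]              # matching coefficient row

print("relative reconstruction error:",
      np.linalg.norm(Y - D @ X) / np.linalg.norm(Y))
```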
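The hybrid approach can likewise be sketched, again under assumed settings: learn a dictionary, sparse-code the inputs with OMP, and train a multilayer perceptron on the resulting coefficients. Synthetic data stands in for the Telugu OCR features, and scikit-learn's MiniBatchDictionaryLearning is used as a stand-in for MOD/K-SVD-style dictionary learning; all parameter values are placeholders.

```python
# Hybrid pipeline sketch: dictionary learning -> OMP sparse codes -> MLP classifier.
from sklearn.datasets import make_classification
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for high-dimensional, many-class OCR feature vectors.
X, y = make_classification(n_samples=2000, n_features=256, n_informative=64,
                           n_classes=10, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

# Learn a 128-atom dictionary; OMP with 8 nonzero coefficients per sample.
dl = MiniBatchDictionaryLearning(n_components=128, transform_algorithm="omp",
                                 transform_n_nonzero_coefs=8, random_state=0)
dl.fit(X_train)
D = dl.components_                                  # rows are dictionary atoms

# Sparse coefficients of train and test data w.r.t. the learned dictionary.
A_train = sparse_encode(X_train, D, algorithm="omp", n_nonzero_coefs=8)
A_test = sparse_encode(X_test, D, algorithm="omp", n_nonzero_coefs=8)

# Simple MLP trained with backpropagation on the sparse codes.
clf = MLPClassifier(hidden_layer_sizes=(100,), max_iter=500, random_state=0)
clf.fit(A_train, y_train)
print("test accuracy:", clf.score(A_test, y_test))
```

Swapping in a different dictionary learner (for example, the K-SVD loop above) or changing the sparsity level only affects the feature-extraction stage; the MLP consumes whatever coefficient matrix is produced.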