Topics
Algorithm
Square (algebra)
Kernel (algebra)
Mathematics
Computer science
Combinatorics
Geometry
Authors
Badong Chen, Songlin Zhao, Pingping Zhu, José C. Príncipe
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
[Institute of Electrical and Electronics Engineers]
Date: 2011-12-21
Volume/Issue: 23 (1): 22-32
Citations: 402
Identifiers
DOI: 10.1109/TNNLS.2011.2178446
Abstract
In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. An analytical study of the mean square convergence is carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, as well as lower and upper bounds on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance of the proposed algorithm.
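The idea described in the abstract can be sketched in a few lines: at each step the filter predicts with a radial basis function expansion, and the new input either updates the coefficient of its closest stored center (if within the quantization size) or is added as a new center. The sketch below is an illustrative reconstruction from the abstract's description, not the authors' code; the parameter names (`eta` for step size, `epsilon` for quantization size, `sigma` for kernel width) are our own.

```python
import numpy as np

def gaussian_kernel(x, centers, sigma=1.0):
    # RBF kernel evaluated between one input x and every stored center
    return np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * sigma ** 2))

def qklms(inputs, desired, eta=0.5, epsilon=0.2, sigma=0.5):
    """Quantized kernel LMS (sketch): merge near-duplicate inputs into
    the closest existing center instead of growing the dictionary."""
    centers = [np.asarray(inputs[0], dtype=float)]
    coeffs = [eta * desired[0]]   # first prediction is 0, so e_0 = d_0
    errors = [desired[0]]
    for u, d in zip(inputs[1:], desired[1:]):
        C = np.asarray(centers)
        e = d - np.dot(coeffs, gaussian_kernel(u, C, sigma))  # a priori error
        dists = np.linalg.norm(C - u, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= epsilon:
            coeffs[j] += eta * e          # "redundant" input: update closest center
        else:
            centers.append(np.asarray(u, dtype=float))
            coeffs.append(eta * e)        # novel input: grow the network
        errors.append(e)
    return np.asarray(centers), np.asarray(coeffs), np.asarray(errors)
```

As a static function estimation example in the spirit of the paper's experiments, training on noisy samples of a smooth target keeps the dictionary far smaller than the number of samples while the squared error decays toward a steady-state floor set by `eta` and `epsilon`.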