Topics
Principal component analysis, Artificial intelligence, Pattern recognition (psychology), Computer science, Divergence (linguistics), Robustness (evolution), Sparse matrix, Sparse approximation, Mathematics, Computer vision, Physics, Gene, Philosophy, Quantum mechanics, Biochemistry, Gaussian distribution, Chemistry, Linguistics
Authors
Aref Miri Rekavandi,Abd-Krim Seghouane,Robin J. Evans
Identifier
DOI:10.1109/tip.2024.3403493
Abstract
In this paper, novel robust principal component analysis (RPCA) methods are proposed to exploit the local structure of datasets. The proposed methods are derived by minimizing the α-divergence between the sample distribution and the Gaussian density model. The α-divergence is used in different frameworks to represent variants of RPCA approaches including orthogonal, non-orthogonal, and sparse methods. We show that the classical PCA is a special case of our proposed methods where the α-divergence is reduced to the Kullback-Leibler (KL) divergence. It is shown in simulations that the proposed approaches recover the underlying principal components (PCs) by down-weighting the importance of structured and unstructured outliers. Furthermore, using simulated data, it is shown that the proposed methods can be applied to fMRI signal recovery and Foreground-Background (FB) separation in video analysis. Results on real-world problems of FB separation as well as image reconstruction are also provided.
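The abstract does not spell out the update equations, but the core idea it describes, recovering principal components while down-weighting outliers that fit a Gaussian model poorly, can be illustrated with a small sketch. The snippet below is a minimal, hypothetical implementation of that intuition: an iteratively re-weighted PCA in which each sample's weight is a powered Gaussian likelihood of its reconstruction residual. The function name `alpha_weighted_pca` and the specific weighting rule are assumptions for illustration, not the authors' derivation; here `alpha` simply plays the role of the divergence parameter, with `alpha → 0` giving uniform weights and hence classical PCA.

```python
import numpy as np

def alpha_weighted_pca(X, n_components=2, alpha=0.5, n_iter=50, tol=1e-6):
    """Illustrative robust-PCA sketch (NOT the paper's exact algorithm):
    iteratively re-weighted PCA where each sample's weight is a powered
    Gaussian likelihood of its residual, so outliers are down-weighted.
    alpha -> 0 yields uniform weights, i.e. classical PCA."""
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    w = np.ones(n) / n                                   # uniform initial weights
    mu = X.mean(axis=0)
    V = np.linalg.svd(X - mu, full_matrices=False)[2][:n_components].T
    for _ in range(n_iter):
        # Weighted mean and weighted covariance under the current weights.
        mu = (w[:, None] * X).sum(axis=0) / w.sum()
        Xc = X - mu
        C = (w[:, None] * Xc).T @ Xc / w.sum()
        # Principal subspace = leading eigenvectors of the weighted covariance.
        eigvals, eigvecs = np.linalg.eigh(C)
        V_new = eigvecs[:, ::-1][:, :n_components]
        # Squared residual of each sample after projection onto the subspace.
        resid = Xc - (Xc @ V_new) @ V_new.T
        sq_err = (resid ** 2).sum(axis=1)
        sigma2 = (w * sq_err).sum() / w.sum() + 1e-12
        # Gaussian likelihood raised to alpha: larger residual -> smaller weight.
        w = np.exp(-alpha * sq_err / (2.0 * sigma2))
        w /= w.sum()
        if np.linalg.norm(V_new - V) < tol:
            V = V_new
            break
        V = V_new
    return mu, V, w
```

As a usage example, calling `alpha_weighted_pca(X, n_components=2, alpha=0.5)` on a data matrix `X` contaminated with a few gross outliers should return weights `w` that are markedly smaller for the outlying rows, which is the qualitative behavior the abstract attributes to the proposed α-divergence-based methods.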