Linear discriminant analysis
Outlier
Discriminant
Norm (philosophy)
Mathematics
Metric (unit)
Artificial intelligence
Computer science
Matrix norm
Pattern recognition (psychology)
Algorithm
Feature vector
Law
Economics
Physics
Quantum mechanics
Operations management
Political science
Authors
Qiaolin Ye,Zechao Li,Liyong Fu,Zhao Zhang,Wankou Yang,Guowei Yang
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
[Institute of Electrical and Electronics Engineers]
Date: 2019-12-01
Volume/Issue: 30 (12): 3818-3832
Citations: 101
Identifier
DOI:10.1109/tnnls.2019.2944869
Abstract
Recently, many studies of robust discriminant analysis have adopted the L1-norm as the distance metric, but their results are not robust enough to gain universal acceptance. To overcome this problem, the authors of this article present a nonpeaked discriminant analysis (NPDA) technique in which a cutting L1-norm is adopted as the distance metric. Because this kind of norm better suppresses heavy outliers in learning models, the proposed algorithm is expected to outperform the existing L1-norm-based robust discriminant analysis techniques in feature-extraction tasks for data representation. The authors also present a comprehensive analysis showing that the cutting L1-norm distance can be computed equivalently as the difference between two special convex functions. On this basis, an efficient iterative algorithm is designed to optimize the proposed objective, and theoretical proofs of the algorithm's convergence are presented. The theoretical insights and the effectiveness of the proposed method are validated by experiments on several real data sets.
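The abstract's key idea is that a "cutting" (capped) L1-norm distance bounds the contribution of any single coordinate, so heavy outliers cannot dominate the objective the way they can under the plain L1-norm. The sketch below illustrates that contrast with a hypothetical capped form, sum(min(|x_i - y_i|, eps)); the cap parameter `eps` and the exact functional form are assumptions for illustration, not the paper's precise definition of the cutting L1-norm.

```python
import numpy as np

def l1_dist(x, y):
    # Plain L1-norm distance: sum of absolute coordinate differences.
    return np.sum(np.abs(x - y))

def capped_l1_dist(x, y, eps=1.0):
    # Hypothetical capped ("cutting") L1-norm distance: each coordinate's
    # contribution is truncated at eps, so one heavy outlier coordinate
    # adds at most eps to the total. (Illustrative assumption, not the
    # paper's exact definition.)
    return np.sum(np.minimum(np.abs(x - y), eps))

x = np.array([0.1, 0.2, 0.1])
corrupted = np.array([0.1, 0.2, 100.0])  # one heavily corrupted coordinate

print(l1_dist(x, corrupted))        # → 99.9, dominated by the outlier
print(capped_l1_dist(x, corrupted)) # → 1.0, outlier's influence capped
```

The bounded per-coordinate loss is also why the objective becomes non-convex, which is consistent with the abstract's remark that it is handled as a difference of two convex functions.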