Artificial intelligence
Discriminative
Pattern recognition (psychology)
Computer science
Linear discriminant analysis
Outlier
Support vector machine
Kernel (algebra)
Robustness (evolution)
Machine learning
Computer vision
Mathematics
Biochemistry
Chemistry
Combinatorics
Gene
Authors
Dong Huang, Ricardo Cabral, Fernando De la Torre
Identifier
DOI: 10.1109/TPAMI.2015.2448091
Abstract
Discriminative methods (e.g., kernel regression, SVM) have been extensively used to solve problems such as object recognition, image alignment and pose estimation from images. These methods typically map image features (X) to continuous (e.g., pose) or discrete (e.g., object category) values. A major drawback of existing discriminative methods is that samples are directly projected onto a subspace and hence fail to account for outliers, which are common in realistic training sets due to occlusion, specular reflections or noise. It is important to note that existing discriminative approaches assume the input variables X to be noise-free. Thus, discriminative methods experience significant performance degradation when gross outliers are present. Despite its obvious importance, the problem of robust discriminative learning has been relatively unexplored in computer vision. This paper develops the theory of robust regression (RR) and presents an effective convex approach that uses recent advances in rank minimization. The framework applies to a variety of problems in computer vision, including robust linear discriminant analysis, regression with missing data, and multi-label classification. Several synthetic and real examples with applications to head pose estimation from images, image and video classification, and facial attribute classification with missing data are used to illustrate the benefits of RR.
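To make the degradation described in the abstract concrete, the sketch below simulates regression on features corrupted by gross, sparse outliers. It is not the RR algorithm of the paper: it uses a generic RPCA-style split of the corrupted feature matrix into a low-rank part plus a sparse error (singular-value and entrywise soft-thresholding inside an ADMM loop), then fits ridge regression on the recovered low-rank part. The data dimensions, corruption level, regularization weights and iteration count are all illustrative assumptions.

```python
# Illustrative sketch only (not the paper's RR solver): regression on features
# corrupted by gross sparse outliers, with and without a rank-minimization clean-up.
# All sizes and hyper-parameters below are assumptions chosen for the demo.
import numpy as np

rng = np.random.default_rng(0)

def svd_shrink(M, tau):
    """Singular-value soft-thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Entrywise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca(X, lam=None, mu=None, iters=200):
    """Split X into low-rank L plus sparse S with a textbook ADMM scheme for
    robust PCA; a generic stand-in for the rank-minimization machinery."""
    m, n = X.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(X).sum() + 1e-12)
    L, S, Y = np.zeros_like(X), np.zeros_like(X), np.zeros_like(X)
    for _ in range(iters):
        L = svd_shrink(X - S + Y / mu, 1.0 / mu)       # low-rank update
        S = soft(X - L + Y / mu, lam / mu)             # sparse-outlier update
        Y += mu * (X - L - S)                          # dual ascent on X = L + S
    return L, S

def ridge_fit(X, y, alpha=1.0):
    """Plain ridge regression; stands in for the 'discriminative method'."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

# Synthetic data: low-rank clean features, a linear target, 5% grossly corrupted entries.
n, d, r = 200, 30, 5
B = rng.normal(size=(r, d))                            # basis of the clean subspace
X_clean = rng.normal(size=(n, r)) @ B
W_true = rng.normal(size=(d, 1))
y = X_clean @ W_true + 0.01 * rng.normal(size=(n, 1))

X_noisy = X_clean.copy()
mask = rng.random(X_noisy.shape) < 0.05                # gross outliers (e.g., occlusion)
X_noisy[mask] += rng.normal(scale=10.0, size=int(mask.sum()))

W_plain = ridge_fit(X_noisy, y)                        # regress directly on corrupted X
L, S = rpca(X_noisy)
W_robust = ridge_fit(L, y)                             # regress on the cleaned features

# Evaluate both models on clean test data drawn from the same subspace.
X_test = rng.normal(size=(50, r)) @ B
y_test = X_test @ W_true
for name, W in [("plain ridge on corrupted X ", W_plain),
                ("ridge on RPCA-cleaned X    ", W_robust)]:
    rmse = np.sqrt(np.mean((X_test @ W - y_test) ** 2))
    print(f"{name}: test RMSE = {rmse:.3f}")
```

On this toy setup the regression fitted on the cleaned features typically predicts clean test targets far more accurately than the one fitted directly on the corrupted features, which mirrors the degradation the abstract describes; the paper's actual convex formulation, and its extensions to robust LDA, missing data and multi-label classification, differ from this generic sketch.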