Artificial intelligence
Computer science
Hinge loss
Deep learning
Margin (machine learning)
Machine learning
Convolutional neural network
Discriminative model
Pattern recognition
Feature learning
Support vector machine
Mathematics
Authors
Chen Huang, Yining Li, Chen Change Loy, Xiaoou Tang
Identifier
DOI: 10.1109/cvpr.2016.580
Abstract
Data in the vision domain often exhibit a highly skewed class distribution, i.e., most data belong to a few majority classes, while the minority classes contain only a scarce number of instances. To mitigate this issue, contemporary classification methods based on deep convolutional neural networks (CNNs) typically follow classic strategies such as class re-sampling or cost-sensitive training. In this paper, we conduct extensive and systematic experiments to validate the effectiveness of these classic schemes for representation learning on class-imbalanced data. We further demonstrate that a more discriminative deep representation can be learned by enforcing a deep network to maintain both inter-cluster and inter-class margins. This tighter constraint effectively reduces the class imbalance inherent in the local data neighborhood. We show that the margins can be easily deployed in a standard deep learning framework through quintuplet instance sampling and the associated triple-header hinge loss. The representation learned by our approach, when combined with a simple k-nearest neighbor (kNN) algorithm, shows significant improvements over existing methods on both high- and low-level vision classification tasks that exhibit imbalanced class distributions.
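To make the "triple-header hinge loss" concrete, the following is a minimal schematic sketch, not the paper's implementation. It assumes a quintuplet of one anchor plus four samples `s1..s4` intended to lie at increasing distances from the anchor (e.g., same-cluster, same-class/different-cluster, through different-class), Euclidean distance in the embedding space, and hypothetical margin hyper-parameters `g1, g2, g3`; three hinge terms jointly enforce the ordering d(a,s1) < d(a,s2) < d(a,s3) < d(a,s4).

```python
import numpy as np

def triple_header_hinge_loss(anchor, s1, s2, s3, s4, margins=(0.5, 0.5, 0.5)):
    """Schematic triple-header hinge loss over one quintuplet.

    Encourages d(anchor, s1) < d(anchor, s2) < d(anchor, s3) < d(anchor, s4),
    i.e., s1 closest to the anchor and s4 farthest. The `margins` tuple holds
    hypothetical margin hyper-parameters (g1, g2, g3), one per hinge term.
    """
    def d(a, b):
        return float(np.linalg.norm(a - b))  # Euclidean distance in embedding space

    d1, d2, d3, d4 = d(anchor, s1), d(anchor, s2), d(anchor, s3), d(anchor, s4)
    g1, g2, g3 = margins
    # Three hinge terms, each penalizing a violated distance ordering.
    return (max(0.0, g1 + d1 - d2)
            + max(0.0, g2 + d2 - d3)
            + max(0.0, g3 + d3 - d4))
```

When the orderings hold with room to spare, all three hinges are zero and the quintuplet contributes no gradient; a violated ordering contributes a linear penalty, which is what lets the network concentrate on the locally imbalanced neighborhoods the abstract describes.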