Authors
Jinhao Du,Guibo Luo,Yuesheng Zhu,Zhiqiang Bai
Identifier
DOI: 10.1109/ictai59109.2023.00091
Abstract
Real-world data often exhibit long-tailed distributions with heavy class imbalance, where the majority (head) classes can dominate the training process and distort the decision boundaries of the minority (tail) classes, leading to biased feature spaces. Recently, researchers have investigated the potential of contrastive learning for long-tailed visual recognition and introduced a class-balanced factor in loss function engineering. Although this approach can improve overall performance, it harms head-class performance due to undesirable bias, resulting in poor separability of minority samples in the feature space. In this paper, we target logit adjustment and propose balanced student-t von Mises-Fisher (bt-vMF) contrastive learning, which encourages a large margin between the head and tail classes and provides better generalization. In addition, a network trained on long-tailed datasets suffers from high uncertainty in its predictions. To alleviate this issue, we build mutual supervision among multiple experts via the proposed bilateral collaborative learning (BCL), in which collaboration is conducted through both bt-vMF similarity and relationship distillation. In short, our designs address the generalization power of a single expert and the knowledge transfer among multiple experts, to alleviate the biased feature space and the prediction uncertainty in long-tailed learning, respectively. Experiments on multiple datasets show that our method achieves competitive performance on long-tailed visual recognition tasks.
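The abstract does not give the paper's exact loss formulas. As a rough illustration of the two ingredients it names, the snippet below sketches (1) the student-t von Mises-Fisher similarity known from prior work, which sharpens the similarity between normalized features, and (2) a standard class-prior logit adjustment that counteracts head-class dominance. Function names, the concentration parameter `kappa`, and the temperature `tau` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def t_vmf_similarity(z1, z2, kappa=16.0):
    """Student-t vMF similarity between two L2-normalized feature vectors.

    Compresses the plain cosine similarity: only near-aligned pairs keep a
    high score, which widens the margin between classes. kappa controls
    how sharply non-aligned pairs are pushed toward -1 (illustrative value).
    """
    cos = float(np.dot(z1, z2))
    return (1.0 + cos) / (1.0 + kappa * (1.0 - cos)) - 1.0

def adjusted_logits(logits, class_counts, tau=1.0):
    """Prior-based logit adjustment for long-tailed classification.

    Adding log class priors to the logits shifts the decision boundary
    away from head classes, so tail classes are not swamped at training
    or inference time.
    """
    prior = class_counts / class_counts.sum()
    return logits + tau * np.log(prior)
```

For identical unit vectors the similarity is exactly 1, while already-orthogonal vectors are mapped close to -1, which is the margin-enlarging behavior the abstract refers to.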