Keywords
Overfitting
Computer science
Artificial intelligence
Smoothing
Machine learning
Multi-label classification
Benchmark (surveying)
Similarity (geometry)
Representation (politics)
Confusion
Relation (database)
Natural language processing
Pattern recognition (psychology)
Data mining
Artificial neural network
Law
Geography
Image (mathematics)
Politics
Computer vision
Political science
Psychology
Geodesy
Psychoanalysis
Authors
Biyang Guo, Songqiao Han, Han Xiao, Hailiang Huang, Ting Lu
Source
Journal: Proceedings of the ... AAAI Conference on Artificial Intelligence
[Association for the Advancement of Artificial Intelligence (AAAI)]
Date: 2021-05-18
Volume/Issue: 35 (14): 12929-12936
Citations: 54
Identifier
DOI: 10.1609/aaai.v35i14.17529
Abstract
Representing the true label as a one-hot vector is the common practice in training text classification models. However, the one-hot representation may not adequately reflect the relation between the instance and labels, as labels are often not completely independent and instances may relate to multiple labels in practice. The inadequate one-hot representations tend to train the model to be over-confident, which may result in arbitrary predictions and model overfitting, especially for confused datasets (datasets with very similar labels) or noisy datasets (datasets with labeling errors). While training models with label smoothing can ease this problem to some degree, it still fails to capture the realistic relations among labels. In this paper, we propose a novel Label Confusion Model (LCM) as an enhancement component to current popular text classification models. LCM can learn label confusion to capture semantic overlap among labels by calculating the similarity between the instance and labels during training, and generate a better label distribution to replace the original one-hot label vector, thus improving the final classification performance. Extensive experiments on five text classification benchmark datasets reveal the effectiveness of LCM for several widely used deep learning classification models. Further experiments also verify that LCM is especially helpful for confused or noisy datasets and superior to the label smoothing method.
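The abstract contrasts three ways to build the training target: the plain one-hot vector, label smoothing, and an LCM-style simulated label distribution built from instance-label similarity. The sketch below illustrates that contrast in NumPy; it is not the authors' implementation — the dot-product similarity, the softmax, and the mixing weight `alpha` are all illustrative assumptions standing in for the paper's learned label-encoder and confusion layer.

```python
import numpy as np

def label_smoothing(one_hot, eps=0.1):
    """Standard label smoothing: mix the one-hot target with a uniform distribution."""
    k = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / k

def lcm_label_distribution(inst_repr, label_reprs, one_hot, alpha=4.0):
    """LCM-style simulated label distribution (a sketch, not the paper's exact model):
    score each label by its similarity to the instance representation, softmax the
    scores into a label confusion distribution, then mix it with the one-hot target
    so similar labels receive non-zero probability mass."""
    sims = label_reprs @ inst_repr                 # hypothetical dot-product similarity
    confusion = np.exp(sims - sims.max())          # softmax over labels (stable form)
    confusion /= confusion.sum()
    mixed = one_hot * alpha + confusion            # alpha keeps the true label dominant
    return mixed / mixed.sum()                     # renormalize to a distribution

# Toy example: 4 classes, 8-dim representations, true class index 2.
rng = np.random.default_rng(0)
inst = rng.normal(size=8)
labels = rng.normal(size=(4, 8))
y = np.zeros(4)
y[2] = 1.0

ls = label_smoothing(y)                    # uniform mass on every wrong label
lcm = lcm_label_distribution(inst, labels, y)  # mass shaped by label similarity
```

Unlike label smoothing, which spreads the same small probability over every wrong label, the simulated distribution assigns more mass to labels whose representations resemble the instance, which is what lets LCM capture semantic overlap on confused datasets.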