Keywords
Consistency (knowledge base), Computer science, Artificial intelligence, Overfitting, Natural language processing, Benchmark (surveying), Machine learning, Machine translation, Resource (disambiguation), Generalization, Deep learning, Artificial neural network, Mathematics, Computer network, Mathematical analysis, Geodesy, Geography
Authors
Xiaobo Liang, Robert Mao, Lijun Wu, Juntao Li, Min Zhang, Qing Li
Source
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Publisher: Institute of Electrical and Electronics Engineers
Date: 2024-01-01
Volume 32, pp. 189-199
Identifier
DOI: 10.1109/taslp.2023.3325970
Abstract
Natural language processing (NLP) has recently shown significant progress in rich-resource scenarios. However, it is much less effective in low-resource scenarios, because models easily overfit the limited training data and generalize poorly to test data. In recent years, consistency training has been widely adopted and has shown great promise in deep learning, but it remains underexplored in low-resource settings. In this work, we propose DM-CT, a framework that incorporates both data-level and model-level consistency training, as well as advanced data augmentation techniques, for low-resource scenarios. Concretely, the input data is first augmented, and the output distributions of different sub-models generated by model variance are forced to be consistent (model-level consistency). Meanwhile, the predictions on the original input and the augmented one are also constrained to be consistent (data-level consistency). Experiments on different low-resource NLP tasks, including neural machine translation (4 IWSLT14 translation tasks, a multilingual translation task, and WMT16 Romanian $\rightarrow$ English translation), natural language understanding tasks (the GLUE benchmark), and named entity recognition (CoNLL-2003 and WikiGold), demonstrate the superiority of DM-CT, which obtains significant and consistent performance improvements.
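The two consistency terms described in the abstract can be sketched as a small loss function. This is a hedged illustration, not the authors' implementation: the function names (`dm_ct_consistency_loss`, `sym_kl`), the choice of symmetric KL as the divergence, and the weighting parameters `alpha`/`beta` are all assumptions for the sake of the example. It takes two stochastic forward passes on the original input (model-level consistency, e.g. via different dropout sub-models) and one pass on the augmented input (data-level consistency):

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sym_kl(p, q, eps=1e-9):
    # symmetric KL divergence between two categorical distributions
    p, q = p + eps, q + eps
    return 0.5 * (np.sum(p * np.log(p / q), axis=-1)
                  + np.sum(q * np.log(q / p), axis=-1))

def dm_ct_consistency_loss(logits_orig_a, logits_orig_b, logits_aug,
                           alpha=1.0, beta=1.0):
    """Hypothetical DM-CT-style consistency terms.

    logits_orig_a, logits_orig_b: two stochastic sub-model passes
        on the original input (model variance, e.g. dropout).
    logits_aug: one pass on the augmented input.
    """
    p_a = softmax(logits_orig_a)
    p_b = softmax(logits_orig_b)
    p_aug = softmax(logits_aug)
    model_level = sym_kl(p_a, p_b).mean()    # consistency across sub-models
    data_level = sym_kl(p_a, p_aug).mean()   # consistency across original/augmented input
    return alpha * model_level + beta * data_level
```

When all three passes agree, both terms vanish; any divergence between sub-models or between original and augmented predictions increases the loss, which in training would be added to the usual task loss.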