Computer science
Security token
Utterance
Language model
Natural language understanding
Artificial intelligence
Adapter (computing)
Sentence
F1 score
Benchmark (surveying)
Natural language processing
Machine learning
Natural language
Computer security
Geodesy
Geography
Operating system
Authors
Yu Guo,Zhilong Xie,Xingyan Chen,Huangen Chen,Leilei Wang,Huaming Du,Shaopeng Wei,Yu Zhao,Qing Li,Gang Wu
Source
Journal: Neurocomputing
[Elsevier]
Date: 2024-04-23
Volume: 591, Article 127725
Citations: 2
Identifier
DOI:10.1016/j.neucom.2024.127725
Abstract
Natural language understanding (NLU) has two core tasks: intent classification and slot filling. The success of pre-trained language models has led to significant breakthroughs in both tasks. Autoencoding architectures (BERT-based models) can optimize the two tasks jointly. However, we note that BERT-based models split each complex token into multiple sub-tokens with the WordPiece algorithm, which creates a misalignment between the lengths of the token and label sequences. As a result, BERT-based models do not perform well in label prediction, which limits further improvement of model performance. Many existing models can address this issue, but some hidden semantic information is discarded during fine-tuning. We address the problem by introducing a novel joint method on top of BERT. This method explicitly models the multiple sub-token features produced by WordPiece tokenization, thereby benefiting both tasks. Our proposed method effectively extracts contextual features from complex tokens using a Sub-words Attention Adapter (SAA), preserving overall utterance information. Additionally, we propose an Intent Attention Adapter (IAA) that acquires comprehensive sentence features to aid intent prediction. Experimental results confirm that our proposed model achieves significant improvements on two public benchmark datasets. Specifically, the slot-filling F1 score improves from 96.5 to 98.2 (an absolute gain of 1.7 points) on the Airline Travel Information Systems (ATIS) dataset.
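The token/label misalignment described in the abstract can be illustrated with a minimal sketch. The subword split and the scheme of masking non-initial sub-tokens with -100 are common conventions assumed for illustration, not the paper's SAA/IAA method:

```python
# Illustration of the sequence-length misalignment caused by subword
# tokenization: word-level slot labels no longer line up one-to-one with
# the sub-tokens a WordPiece-style tokenizer emits. The toy split table
# below is a hypothetical stand-in for a real WordPiece vocabulary.

IGNORE = -100  # label id conventionally skipped by the loss function

def align_labels(words, labels, subword_splits):
    """Expand word-level BIO labels to sub-token level.

    Only the first sub-token of each word keeps the word's label; the
    remaining sub-tokens receive IGNORE so the loss ignores them.
    """
    sub_tokens, sub_labels = [], []
    for word, label in zip(words, labels):
        pieces = subword_splits.get(word, [word])
        sub_tokens.extend(pieces)
        sub_labels.append(label)
        sub_labels.extend([IGNORE] * (len(pieces) - 1))
    return sub_tokens, sub_labels

# ATIS-style utterance with one multi-piece word.
words = ["flights", "to", "baltimore"]
labels = ["O", "O", "B-toloc.city_name"]
splits = {"baltimore": ["bal", "##timore"]}  # hypothetical split

tokens, tok_labels = align_labels(words, labels, splits)
print(tokens)      # ['flights', 'to', 'bal', '##timore']
print(tok_labels)  # ['O', 'O', 'B-toloc.city_name', -100]
```

Note that 3 word-level labels become 4 sub-token slots; masking the extra positions keeps training runnable but discards the trailing sub-token's signal, which is the information loss the proposed SAA aims to recover.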