Authors
Peng Yang,Dong Hong Ji,Chengming Ai,Bing Li
Identifier
DOI:10.1016/j.knosys.2020.106537
Abstract
Spoken language understanding (SLU) plays a central role in dialog systems and typically involves two tasks: intent detection and slot filling. Existing joint models improve performance by introducing richer word, intent, and slot semantic features; however, methods that explicitly model the interactions between these features remain underexplored. In this paper, we propose a novel joint model based on a position-aware multi-head masked attention mechanism, which explicitly models the interaction between the word encoding feature and the intent–slot features, thereby generating context features that contribute to slot filling. In addition, we adopt a multi-head attention mechanism to summarize utterance-level semantic knowledge for intent detection. Experiments show that our model achieves state-of-the-art results and improves sentence-level semantic frame accuracy, with 2.30% and 0.69% improvement over the previous best model on the SNIPS and ATIS datasets, respectively.
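The abstract describes multi-head attention with a position-aware mask. The paper's exact mechanism is not given here, so the following is only a minimal NumPy sketch of the general idea, assuming the "position-aware mask" restricts each token to a local window of neighbors (a common choice); the window size, head count, and random projections are illustrative, not the authors' configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_masked_attention(X, num_heads=2, window=2, seed=0):
    """Toy multi-head self-attention over token features X of shape (T, d),
    with a position-aware mask: each position attends only to tokens
    within `window` steps of itself. Returns (T, d) context features."""
    T, d = X.shape
    assert d % num_heads == 0
    dh = d // num_heads
    # Position-aware mask: True marks pairs too far apart to attend.
    pos = np.arange(T)
    mask = np.abs(pos[:, None] - pos[None, :]) > window
    rng = np.random.default_rng(seed)
    heads = []
    for _ in range(num_heads):
        # Random projections stand in for learned Q/K/V weights.
        Wq, Wk, Wv = (rng.standard_normal((d, dh)) * 0.1 for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(dh)      # scaled dot-product scores
        scores[mask] = -1e9                  # mask out-of-window positions
        heads.append(softmax(scores) @ V)    # per-head context vectors
    return np.concatenate(heads, axis=-1)
```

In a full joint model the queries would come from the word encodings and the keys/values from intent–slot features, and the resulting context features would feed the slot-filling decoder; this sketch only illustrates the masked multi-head computation itself.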