Computer science
Ontology
Graph
Vocabulary
Artificial intelligence
Schema (genetic algorithms)
Machine learning
Theoretical computer science
Linguistics
Epistemology
Philosophy
Authors
Meng Zeng, Lifang Wang, Zejun Jiang, Ronghan Li, Xinyu Lu, Zhongtian Hu
Identifier
DOI:10.1016/j.knosys.2022.110069
Abstract
A task-oriented dialogue system (TOD) is an important application of artificial intelligence. In the past few years, works on multi-domain TODs have attracted increased research attention and have seen much progress. A main challenge of such dialogue systems is finding ways to deal with cross-domain slot sharing and dialogue act temporal planning. However, existing studies seldom consider the models’ reasoning ability over the dialogue history; moreover, existing methods overlook the structure information of the ontology schema, which makes them inadequate for handling multi-domain TODs. In this paper, we present a multi-task learning framework equipped with graph attention networks (GATs) to address the above two challenges. In the method, we explore a dialogue state GAT consisting of a dialogue context subgraph and an ontology schema subgraph to alleviate the cross-domain slot sharing issue. We further construct a GAT-enhanced memory network using the updated nodes in the ontology subgraph to filter out the irrelevant nodes and acquire the needed dialogue states. For dialogue act temporal planning, a similar GAT and corresponding memory network are proposed to obtain fine-grained dialogue act representations. Moreover, we design an entity detection task to improve the capability of the soft gate, which determines whether the generated tokens are drawn from the vocabulary or the knowledge base. In the training phase, four training tasks are combined and optimized simultaneously to facilitate the response generation process. The experimental results for automatic and human evaluations show that the proposed model achieves superior results compared to the state-of-the-art models on the MultiWOZ 2.0 and MultiWOZ 2.1 datasets.
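The graph attention mechanism the abstract relies on can be illustrated with a minimal sketch. This is not the paper's implementation; it is a standard single-head GAT layer (in the style of Veličković et al.) written in plain numpy, with a toy 4-node graph whose node roles (dialogue-turn vs. ontology nodes) are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_layer(H, A, W, a):
    """One single-head graph-attention layer.

    H: (N, F)  node features
    A: (N, N)  adjacency mask (1 = edge; self-loops included)
    W: (F, F') shared linear projection
    a: (2*F',) attention vector, split into parts for source and target
    """
    Z = H @ W                       # (N, F') projected node features
    Fp = Z.shape[1]
    s_src = Z @ a[:Fp]              # contribution of node i as attention source
    s_dst = Z @ a[Fp:]              # contribution of node j as attention target
    # e[i, j] = LeakyReLU(a . [z_i || z_j]), computed for all pairs at once
    e = leaky_relu(s_src[:, None] + s_dst[None, :])
    e = np.where(A > 0, e, -np.inf)          # mask out non-neighbors
    e = e - e.max(axis=1, keepdims=True)     # numerically stable softmax
    alpha = np.exp(e) / np.exp(e).sum(axis=1, keepdims=True)
    return alpha @ Z                # each node aggregates its neighbors

# Toy graph: 4 nodes (imagine two dialogue-context nodes and two
# ontology-schema nodes; this pairing is an assumption for illustration).
N, F, Fp = 4, 6, 4
H = rng.normal(size=(N, F))
A = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 1]])
W = rng.normal(size=(F, Fp))
a = rng.normal(size=2 * Fp)
H_new = gat_layer(H, A, W, a)
print(H_new.shape)  # (4, 4)
```

In the paper's setting, the updated ontology-subgraph nodes produced by such a layer feed a memory network that filters irrelevant nodes; the sketch above only shows the attention-weighted aggregation step itself.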