Keywords
Computer science
Question answering
Knowledge base
Generalization
Artificial intelligence
Task (project management)
Natural language processing
Base (topology)
Training set
Natural language
Machine learning
Information retrieval
Mathematics
Mathematical analysis
Economics
Management
Authors
Aiting Liu, Ziqi Huang, Hengtong Lu, Xiaojie Wang, Caixia Yuan
Identifier
DOI:10.1007/978-3-030-32381-3_7
Abstract
Knowledge base question answering aims to answer natural language questions by querying an external knowledge base, and has been widely applied in real-world systems. Most existing methods are template-based or train BiLSTMs or CNNs on a task-specific dataset. However, hand-crafted templates are time-consuming to design and overly rigid, with little ability to generalize. At the same time, BiLSTMs and CNNs require large-scale training data, which is impractical in most cases. To address these problems, we utilize the widely used pre-trained BERT model, which leverages prior linguistic knowledge to obtain deep contextualized representations. Experimental results demonstrate that our model achieves state-of-the-art performance on the NLPCC-ICCPOL 2016 KBQA dataset, with an 84.12% averaged F1 score (a 1.65% absolute improvement).
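The abstract describes replacing task-specific BiLSTM/CNN encoders with pre-trained BERT representations for KBQA. The following is a minimal sketch of that idea, not the paper's actual architecture: it assumes the HuggingFace transformers library (which postdates the paper), the bert-base-chinese checkpoint, and a simple cosine-similarity ranking of candidate KB predicates; the paper's fine-tuned matching model would differ.

```python
# A minimal sketch of using pre-trained BERT representations to match a
# question against candidate KB predicates. Assumptions (not from the
# paper): HuggingFace `transformers`, the `bert-base-chinese` checkpoint,
# and cosine-similarity ranking are illustrative stand-ins.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertModel.from_pretrained("bert-base-chinese")
model.eval()

def encode(text: str) -> torch.Tensor:
    """Return one contextualized vector for `text` (the [CLS] hidden state)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state[:, 0, :].squeeze(0)

def rank_predicates(question: str, predicates: list) -> list:
    """Score candidate KB predicates against the question by cosine similarity."""
    q = encode(question)
    scored = []
    for p in predicates:
        score = torch.nn.functional.cosine_similarity(q, encode(p), dim=0).item()
        scored.append((p, score))
    return sorted(scored, key=lambda x: x[1], reverse=True)

# Example: pick the KB relation most similar to the question.
candidates = ["出生日期", "出生地", "国籍"]
print(rank_predicates("姚明是在哪里出生的?", candidates))
```

Because BERT supplies contextualized representations learned from large corpora, even this untuned similarity baseline needs no task-specific training data, which is the motivation the abstract gives for moving away from BiLSTMs and CNNs.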