Computer science
Artificial intelligence
Scalability
Machine learning
Question answering
Benchmark (surveying)
Language model
Natural language processing
Knowledge graph
Graph
Theoretical computer science
Geodesy
Database
Geography
Authors
Nan Hu,Yike Wu,Guilin Qi,Dehai Min,Jiaoyan Chen,Jeff Z. Pan,Zafar Ali
Source
Journal: World Wide Web
[Springer Nature]
Date: 2023-05-17
Volume (issue): 26 (5): 2855-2886
Identifier
DOI:10.1007/s11280-023-01166-y
Abstract
Large-scale pre-trained language models (PLMs) such as BERT have recently achieved great success and become a milestone in natural language processing (NLP). It is now the consensus of the NLP community to adopt PLMs as the backbone for downstream tasks. In recent works on knowledge graph question answering (KGQA), BERT or its variants have become necessary in KGQA models. However, there is still a lack of comprehensive research comparing the performance of different PLMs in KGQA. To this end, we summarize two basic KGQA frameworks based on PLMs, without additional neural network modules, to compare the performance of nine PLMs in terms of accuracy and efficiency. In addition, we present three benchmarks for larger-scale KGs, based on the popular SimpleQuestions benchmark, to investigate the scalability of PLMs. We carefully analyze the results of all PLM-based basic KGQA frameworks on these benchmarks and on two other popular datasets, WebQuestionSP and FreebaseQA, and find that knowledge distillation techniques and knowledge enhancement methods in PLMs are promising for KGQA. Furthermore, we test ChatGPT (https://chat.openai.com/), which has drawn a great deal of attention in the NLP community, demonstrating its impressive capabilities and limitations in zero-shot KGQA. We have released the code and benchmarks to promote the use of PLMs for KGQA (https://github.com/aannonymouuss/PLMs-in-Practical-KBQA).
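To make the setup concrete, below is a minimal, illustrative sketch of one step that a PLM-based KGQA framework of the kind the abstract describes typically needs: scoring candidate KG relations against a question with an off-the-shelf BERT-style encoder. This is not the authors' exact pipeline; the model name, the example question, and the candidate relation labels are assumptions for illustration only, and any of the compared PLMs could be swapped in.

```python
# Hedged sketch: rank hypothetical candidate relations for a question by cosine
# similarity of [CLS] embeddings from a BERT-style encoder (illustrative only).
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumed model; other PLMs could be substituted

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)
encoder.eval()

def cls_embedding(texts):
    """Return the [CLS] vector for each input text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch)
    return out.last_hidden_state[:, 0]  # shape: (batch, hidden)

question = "who wrote the music for star wars"
candidate_relations = [            # hypothetical relation labels, not from the paper
    "music composed by",
    "film directed by",
    "country of origin",
]

q_vec = cls_embedding([question])            # (1, hidden)
r_vecs = cls_embedding(candidate_relations)  # (n, hidden)
scores = torch.nn.functional.cosine_similarity(q_vec, r_vecs)

best = scores.argmax().item()
print(f"predicted relation: {candidate_relations[best]} (score={scores[best]:.3f})")
```

In practice such an encoder would be fine-tuned on question-relation pairs rather than used zero-shot; the sketch only shows where the choice of PLM enters the accuracy/efficiency comparison.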