Computer science
Text generation
Automatic summarization
Natural language generation
Knowledge base
Conversation
Artificial intelligence
Machine translation
Knowledge graph
Text processing
Natural language processing
Natural language
Information retrieval
Data science
Linguistics
Philosophy
Authors
Wenhao Yu, Chenguang Zhu, Zaitang Li, Zhiting Hu, Qingyun Wang, Heng Ji, Meng Jiang
Source
Journal: ACM Computing Surveys
[Association for Computing Machinery]
Date: 2022-01-31
Volume/Issue: 54 (11s): 1-38
Citations: 33
Abstract
The goal of text-to-text generation is to make machines express themselves like a human in many applications such as conversation, summarization, and translation. It is one of the most important yet challenging tasks in natural language processing (NLP). Various neural encoder-decoder models have been proposed to achieve the goal by learning to map input text to output text. However, the input text alone often provides limited knowledge to generate the desired output, so the performance of text generation is still far from satisfactory in many real-world scenarios. To address this issue, researchers have considered incorporating (i) internal knowledge embedded in the input text and (ii) external knowledge from outside sources such as knowledge bases and knowledge graphs into the text generation system. This research topic is known as knowledge-enhanced text generation. In this survey, we present a comprehensive review of the research on this topic over the past five years. The main content includes two parts: (i) general methods and architectures for integrating knowledge into text generation; (ii) specific techniques and applications according to different forms of knowledge data. This survey is intended for a broad audience of researchers and practitioners in academia and industry.
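The abstract's core idea, that input text alone often carries too little knowledge and must be supplemented from an external source such as a knowledge base, can be illustrated with a minimal toy sketch. This is not the survey's method, only an assumed illustration: the knowledge base is modeled as a plain dictionary, and `retrieve_facts` and `generate` are hypothetical names.

```python
# Toy sketch of knowledge-enhanced generation: consult an external
# knowledge source when the input text alone is insufficient.
# The knowledge base, function names, and data are all illustrative.

def retrieve_facts(input_text, knowledge_base):
    """Return knowledge-base facts whose entity is mentioned in the input."""
    return {e: f for e, f in knowledge_base.items() if e in input_text}

def generate(input_text, knowledge_base):
    """Toy 'generator': compose an output from the input plus retrieved facts."""
    facts = retrieve_facts(input_text, knowledge_base)
    if not facts:
        # Without external knowledge, the input alone yields a poor output.
        return "I don't know."
    entity, fact = next(iter(facts.items()))
    return f"{entity} {fact}."

kb = {"Paris": "is the capital of France",
      "ACM": "publishes ACM Computing Surveys"}

print(generate("Tell me about Paris", kb))  # Paris is the capital of France.
```

A real system would replace the dictionary lookup with learned retrieval and a neural encoder-decoder, but the division of labor is the same: retrieve relevant knowledge, then condition generation on it.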