Computer science
Verbal reasoning
Reasoning system
Visual reasoning
Artificial intelligence
Deductive reasoning
Qualitative reasoning
Natural language processing
Representation (politics)
Model-based reasoning
Opportunistic reasoning
Adaptive reasoning
Space (punctuation)
Knowledge representation and reasoning
Semantics (computer science)
Construct (Python library)
Programming language
Cognition
Psychology
Neuroscience
Politics
Political science
Law
Operating system
Authors
Siyuan Wang, Zhongyu Wei, Jiarong Xu, Taishan Li, Zhihao Fan
Source
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Pages: 1-11
Identifier
DOI: 10.1109/taslp.2023.3325973
Abstract
Recent pre-trained language models (PLMs) equipped with foundational reasoning skills have shown remarkable performance on downstream complex tasks. However, the important skill of structure reasoning has rarely been studied; it involves modeling implicit structure information within the text and performing explicit logical reasoning over it to deduce the conclusion. This paper proposes a unified learning framework that combines explicit structure reasoning with language pre-training to endow PLMs with the structure reasoning skill. It first identifies several elementary structures within contexts to construct structured queries, then performs step-by-step reasoning along the queries to identify the answer entity. The fusion of textual semantics and structure reasoning is achieved by using contextual representations learned by PLMs to initialize the representation space of structures, and by performing stepwise reasoning in this semantic representation space. Experimental results on four datasets demonstrate that the proposed model achieves significant improvements on complex reasoning tasks involving diverse structures, transfers to downstream tasks with limited training data, and is effective for complex reasoning in the knowledge-graph modality.
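The pipeline the abstract outlines — construct a structured query over entities and relations, then reason step by step in an embedding space initialized from contextual representations — can be sketched in miniature. Everything below is a hypothetical illustration, not the paper's actual model: the entity vectors stand in for PLM-derived contextual embeddings, relations are modeled as simple linear projections, and all names (`step`, `answer`, the toy vocabularies) are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Toy vocabularies. In the paper's framework, entity representations would be
# initialized from PLM contextual embeddings rather than random vectors.
entities = {name: rng.normal(size=dim) for name in ["Paris", "France", "Europe"]}
relations = {name: rng.normal(size=(dim, dim)) for name in ["capital_of", "located_in"]}

def step(state: np.ndarray, relation: str) -> np.ndarray:
    """One reasoning step: project the current state through a relation."""
    return relations[relation] @ state

def answer(start: str, query: list[str]) -> str:
    """Follow a structured query (a chain of relations) step by step from the
    start entity, then return the candidate entity closest to the final state."""
    state = entities[start]
    for rel in query:
        state = step(state, rel)

    def cos(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    return max(entities, key=lambda e: cos(entities[e], state))

# A two-hop structured query: which entity does Paris reach via
# capital_of followed by located_in?
result = answer("Paris", ["capital_of", "located_in"])
print(result)
```

With trained relation projections the final state would land near the correct answer entity; here the projections are random, so the sketch only demonstrates the control flow of stepwise query execution, not the learned reasoning itself.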