Counterfactual thinking
Context (archaeology)
Computer science
Inference
Representation (politics)
Artificial intelligence
Discourse
Causal reasoning
Ranking (information retrieval)
Debiasing
Natural language processing
Machine learning
Cognitive science
Psychology
Social psychology
Cognition
Paleontology
Neuroscience
Politics
Political science
Law
Biology
Authors
Xu Wang, Hainan Zhang, Shuai Zhao, Hongshen Chen, Zhuoye Ding, Zhiguo Wan, Bo Cheng, Yanyan Lan
Source
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
[Institute of Electrical and Electronics Engineers]
Date: 2023-12-15
Volume: 32, Pages: 1125-1132
Citations: 1
Identifiers
DOI: 10.1109/taslp.2023.3343608
Abstract
In the multi-turn dialogue reasoning task, existing models conduct word-level interaction over the entire context to gather reasoning evidence, aiming to select the logically correct response from the candidate options. Observing that the salient reasoning evidence usually comes from certain snippets of the whole dialogue session, one promising direction is to explicitly identify the candidate reasoning contexts correlated with the dialogue reasoning options, called option-related contexts, and then make logical inferences among them. However, such option-related contexts are stained with noisy information. As a result, existing models may reason unfairly with biased context and select wrong options. To tackle this context bias problem, in this paper we propose a novel CounterFactual learning framework for Dialogue Reasoning, named CF-DialReas, which mitigates the bias by subtracting the counterfactual representation from the total causal representation. Specifically, we consider two scenarios: factual dialogue reasoning, where the whole context is available to estimate the total causal representation, and counterfactual dialogue reasoning, which first utilizes three different types of utterance selectors to select the option-unrelated context and then uses only that option-unrelated context to estimate the counterfactual representation. Experimental results on two public dialogue reasoning datasets show that the model with our mechanism obtains higher ranking measures, validating the effectiveness of counterfactual learning in CF-DialReas. Further analysis of the generality of CF-DialReas shows that our counterfactual learning mechanism is broadly effective for widely used models.
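To make the debias-by-subtraction idea in the abstract concrete, the sketch below shows, under assumed tensor shapes and a placeholder GRU encoder, how a shared scorer could be applied once to the full context (factual branch) and once to the option-unrelated context chosen by an utterance selector (counterfactual branch), with the counterfactual score subtracted from the factual one to rank candidate options. All class, variable, and module names here are illustrative assumptions, not the paper's released implementation.

```python
# Minimal sketch (assumptions, not the authors' code) of subtracting a
# counterfactual score from the factual one to debias option ranking.
import torch
import torch.nn as nn


class CounterfactualDialogueScorer(nn.Module):
    """Scores each candidate option twice: once against the full dialogue
    context (factual) and once against only the option-unrelated utterances
    picked by a selector (counterfactual), then debiases by subtraction."""

    def __init__(self, hidden_size: int = 768):
        super().__init__()
        # Hypothetical encoder over pre-computed token embeddings.
        self.encoder = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.scorer = nn.Linear(hidden_size, 1)

    def _score(self, pair_embeddings: torch.Tensor) -> torch.Tensor:
        # pair_embeddings: (num_options, seq_len, hidden_size)
        _, last_state = self.encoder(pair_embeddings)
        return self.scorer(last_state.squeeze(0)).squeeze(-1)  # (num_options,)

    def forward(self, full_context: torch.Tensor,
                unrelated_context: torch.Tensor) -> torch.Tensor:
        factual = self._score(full_context)              # total causal score
        counterfactual = self._score(unrelated_context)  # bias-only score
        # Debiased ranking score: remove what can be predicted from the
        # option-unrelated (noisy, biased) context alone.
        return factual - counterfactual


# Toy usage: rank 4 candidate options for one dialogue (random embeddings).
model = CounterfactualDialogueScorer()
full = torch.randn(4, 20, 768)       # options paired with the whole context
unrelated = torch.randn(4, 12, 768)  # options paired with unrelated snippets
print(model(full, unrelated).argmax().item())  # index of the chosen option
```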