Interpretability
Leverage (statistics)
Computer science
Graph
Machine learning
Matching (statistics)
Artificial intelligence
Data mining
Data science
Theoretical computer science
Mathematics
Statistics
Authors
Guo Hao, Weixin Zeng, Jiuyang Tang, Xiang Zhao
Identifier
DOI: 10.1145/3583780.3614936
Abstract
Automatic detection of fake news has received widespread attention in recent years. Numerous approaches have been proposed that achieve high detection accuracy, but most of them lack convincing explanations, making it difficult to curb the continued spread of false news in real-life cases. Although some models leverage external resources to provide preliminary interpretability, such external signals are not always available. To fill this gap, in this work we put forward IKA, an interpretable fake news detection model that makes use of historical evidence in the form of graphs. Specifically, we establish both positive and negative evidence graphs by collecting signals from historical news, i.e., the training data. Then, given a piece of news to be detected, in addition to the common features used for fake news detection, we compare the news against the evidence graphs to generate both a matching vector and the related graph evidence that explains the prediction. We conduct extensive experiments on both Chinese and English datasets. The experimental results show that the detection accuracy of IKA exceeds that of state-of-the-art approaches and that IKA can provide useful explanations for its predictions. Moreover, IKA is general and can be applied to other models to improve their interpretability.
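The following is a minimal sketch of the evidence-graph idea described in the abstract, not the authors' actual IKA implementation: the tokenization, the co-occurrence-based graph construction, and the particular matching-vector features are all assumptions made for illustration.

```python
# Sketch: build positive/negative evidence graphs from labeled historical
# news, then compare a new item against both graphs to produce a matching
# vector plus human-readable graph evidence. Hypothetical simplification.
from collections import defaultdict
from itertools import combinations

def build_evidence_graph(news_items):
    """Build a token co-occurrence graph from tokenized news items.

    Nodes are tokens; the weight of edge (u, v) counts how often the two
    tokens co-occur in the same item. Returns {(u, v): weight}, u < v.
    """
    graph = defaultdict(int)
    for tokens in news_items:
        for u, v in combinations(sorted(set(tokens)), 2):
            graph[(u, v)] += 1
    return graph

def matching_vector(tokens, pos_graph, neg_graph):
    """Compare one news item against both evidence graphs.

    Returns a small feature vector: total matched edge weight in the
    positive (real-news) and negative (fake-news) graphs, and the number
    of matched edges in each.
    """
    pairs = list(combinations(sorted(set(tokens)), 2))
    pos_w = sum(pos_graph.get(p, 0) for p in pairs)
    neg_w = sum(neg_graph.get(p, 0) for p in pairs)
    pos_hits = sum(1 for p in pairs if p in pos_graph)
    neg_hits = sum(1 for p in pairs if p in neg_graph)
    return [pos_w, neg_w, pos_hits, neg_hits]

def matched_evidence(tokens, graph, top_k=5):
    """Return the strongest matching edges as explanatory evidence."""
    pairs = combinations(sorted(set(tokens)), 2)
    hits = [(p, graph[p]) for p in pairs if p in graph]
    return sorted(hits, key=lambda x: -x[1])[:top_k]

# Toy usage with invented historical (training) news, labeled real vs. fake.
real_news = [["vaccine", "approved", "fda"], ["election", "results", "certified"]]
fake_news = [["vaccine", "microchip", "tracking"], ["election", "rigged", "aliens"]]
pos_graph = build_evidence_graph(real_news)
neg_graph = build_evidence_graph(fake_news)

claim = ["vaccine", "microchip", "fda"]
print(matching_vector(claim, pos_graph, neg_graph))  # e.g. [1, 1, 1, 1]
print(matched_evidence(claim, neg_graph))            # edges supporting "fake"
```

In the paper, the matching vector is combined with the model's other detection features, while the matched edges serve as the returned explanation; this sketch only illustrates that two-graph comparison step.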