Computer science
Multimodal learning
Pattern
Semantics (computer science)
Artificial intelligence
Modality (human-computer interaction)
Modal verb
Multimodality
Natural language processing
Social science
Chemistry
Sociology
World Wide Web
Polymer chemistry
Programming language
Authors
Zhi Zeng,Mingmin Wu,Guodong Li,Xiang Li,Zhongqiang Huang,Ying Sha
Identifier
DOI:10.1109/icme55011.2023.00486
Abstract
Multimodal fake news detection has become a prominent topic within fake news detection research. Existing models make great efforts to capture and fuse the multimodal semantics of news for classification. However, they overlook mitigating the inconsistency between different modalities, which may cause them to learn biased statistical information. Therefore, we propose a mitigating multimodal inconsistency contrastive learning framework (MMICF), which mitigates inconsistency in multimodal relations for fake news detection. Inspired by the various forms of artificial fake news, we summarize two patterns of multimodal inconsistency: local and global inconsistency. To mitigate local inconsistency in multimodal relations, we use a causal-relation reasoning module that causally removes the direct effects of the textual and visual entities. Considering the influence of global inconsistency in multimodal semantics, our contrastive learning framework mitigates the semantic deviation of the contrastive text-image objectives, which are constrained to a unified semantic space by a modal unified module. Thus, MMICF can jointly mitigate local and global inconsistency, maximally exploiting consistent multimodal semantics for fake news detection. Extensive experimental results show that MMICF improves the performance of multimodal fake news detection and provides a novel paradigm for contrastive learning that mitigates multimodal inconsistency.
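The abstract does not include an implementation, so the following is only a minimal illustrative sketch of the general idea it describes: projecting text and image features into one shared semantic space and training them with a symmetric contrastive objective so that matched text-image pairs align. All module names, feature dimensions, and the InfoNCE-style loss here are assumptions for illustration, not the authors' MMICF or its modal unified module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class UnifiedContrastiveSketch(nn.Module):
    """Illustrative sketch (assumed, not the paper's code): map text and image
    features into a shared space and apply a symmetric contrastive loss, i.e.
    the general notion of constraining text-image objectives to one semantic
    space before contrasting them."""

    def __init__(self, text_dim=768, image_dim=2048, shared_dim=256, temperature=0.07):
        super().__init__()
        # Hypothetical unification heads: one linear projection per modality.
        self.text_proj = nn.Linear(text_dim, shared_dim)
        self.image_proj = nn.Linear(image_dim, shared_dim)
        self.temperature = temperature

    def forward(self, text_feat, image_feat):
        # L2-normalized embeddings in the shared (unified) space.
        t = F.normalize(self.text_proj(text_feat), dim=-1)
        v = F.normalize(self.image_proj(image_feat), dim=-1)
        # Pairwise cosine similarities scaled by temperature.
        logits = t @ v.t() / self.temperature
        targets = torch.arange(t.size(0), device=t.device)
        # Symmetric cross-entropy: the i-th text and i-th image are positives.
        loss_t2v = F.cross_entropy(logits, targets)
        loss_v2t = F.cross_entropy(logits.t(), targets)
        return 0.5 * (loss_t2v + loss_v2t)


if __name__ == "__main__":
    # Random tensors stand in for text/image encoder outputs of a news batch.
    model = UnifiedContrastiveSketch()
    text = torch.randn(8, 768)
    image = torch.randn(8, 2048)
    print(model(text, image).item())
```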