Mode
Modality (human–computer interaction)
Computer science
Coding (set theory)
Architecture
Artificial intelligence
Information retrieval
Art
Social science
Set (abstract data type)
Sociology
Visual arts
Programming language
Authors
Shivangi Singhal, Tanisha Pandey, Saksham Mrig, Rajiv Ratn Shah, Ponnurangam Kumaraguru
Identifier
DOI:10.1145/3487553.3524650
Abstract
Recent years have witnessed massive growth in the proliferation of fake news online. User-generated content is a blend of text and visual information, producing different variants of fake news. As a result, researchers have started targeting multimodal methods for fake news detection. Existing methods capture high-level information from different modalities and jointly model them to make a decision. Given multiple input modalities, we hypothesize that not all modalities may be equally responsible for decision-making. Hence, this paper presents a novel architecture that effectively identifies and suppresses information from weaker modalities and extracts relevant information from the strong modality on a per-sample basis. We also establish intra-modality relationships by extracting fine-grained image and text features. We conduct extensive experiments on real-world datasets to show that our approach outperforms the state-of-the-art by an average of 3.05% and 4.525% on accuracy and F1-score, respectively. We also release the code, implementation details, and model checkpoints for the community's interest.
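The abstract describes per-sample suppression of weaker modalities but does not reproduce the method here; as an illustration only, below is a minimal PyTorch sketch of what such per-sample modality gating might look like, where a learned gate scores each sample's text and image features and down-weights the weaker modality before fusion. The class name `ModalityGate`, the feature dimensions (768 for text, 2048 for images), and the softmax-based fusion are assumptions made for this sketch, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class ModalityGate(nn.Module):
    """Per-sample gating: scores each modality's feature vector and
    re-weights it, so the weaker modality contributes less to the fused
    representation. Dimensions and fusion scheme are illustrative only."""

    def __init__(self, text_dim=768, image_dim=2048, hidden_dim=256):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        # One weight per modality, computed from both projections per sample.
        self.gate = nn.Sequential(
            nn.Linear(2 * hidden_dim, 2),
            nn.Softmax(dim=-1),
        )
        self.classifier = nn.Linear(hidden_dim, 2)  # fake vs. real

    def forward(self, text_feat, image_feat):
        t = torch.relu(self.text_proj(text_feat))       # (B, hidden_dim)
        v = torch.relu(self.image_proj(image_feat))     # (B, hidden_dim)
        weights = self.gate(torch.cat([t, v], dim=-1))  # (B, 2), per sample
        fused = weights[:, 0:1] * t + weights[:, 1:2] * v
        return self.classifier(fused), weights


# Example: a batch of 4 samples with pre-extracted text/image features.
model = ModalityGate()
logits, weights = model(torch.randn(4, 768), torch.randn(4, 2048))
print(logits.shape, weights)  # torch.Size([4, 2]) and per-sample modality weights
```

In a full model the gate would be trained end-to-end with the fake/real labels, so the per-sample weights learn which modality to trust for each input; the fine-grained intra-modality modeling mentioned in the abstract would sit upstream of the pooled features used here.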