Computer science
Rumor
Modal verb
Artificial intelligence
Metric (unit)
Mode
Deep learning
Multimodality
Pairwise comparison
Modality (human–computer interaction)
Multimodal learning
Machine learning
Optics (focusing)
Natural language processing
Engineering
World Wide Web
Social science
Chemistry
Operations management
Public relations
Physics
Optics
Sociology
Political science
Polymer chemistry
Authors
Liwen Peng, Songlei Jian, Dongsheng Li, Siqi Shen
Identifier
DOI:10.1109/icassp49357.2023.10096188
Abstract
Multimodal rumor detection aims at detecting rumors using information from textual and visual modalities. The most critical difficulty in multimodal rumor detection lies in capturing both the intra-modal and inter-modal relationships from multimodal data. However, existing methods mainly focus on the multimodal fusion process while paying little attention to the intra-modal relationships. To address these limitations, we propose a multimodal rumor detection method with deep metric learning (MRML) to effectively extract multimodal relationships of news for detecting rumors. Specifically, we design metric-based triplet learning to extract the intra-modal relationships between rumors and non-rumors in every modality, and contrastive pairwise learning to capture the inter-modal relationships across modalities. Extensive experiments on two real-world multimodal datasets show the superior performance of our rumor detection method.
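The abstract describes two complementary objectives: a triplet loss applied within each modality (intra-modal) and a contrastive pairwise loss that aligns text and image representations of the same news item (inter-modal). Below is a minimal PyTorch sketch of such a loss combination; it is an illustrative assumption based only on the abstract, not the authors' released MRML code, and all function names, dimensions, margin, and temperature values are hypothetical.

```python
import torch
import torch.nn.functional as F


def intra_modal_triplet_loss(anchor, positive, negative, margin=1.0):
    """Metric-based triplet loss within one modality.

    anchor/positive share the same rumor label; negative has the opposite
    label. All inputs are (batch, dim) embeddings. Margin is an assumed value.
    """
    return F.triplet_margin_loss(anchor, positive, negative, margin=margin)


def inter_modal_contrastive_loss(text_emb, image_emb, temperature=0.07):
    """Contrastive pairwise loss across modalities (InfoNCE-style sketch).

    The matched text/image pair of the same news item is the positive pair;
    all other pairings in the batch serve as negatives.
    """
    text_emb = F.normalize(text_emb, dim=-1)
    image_emb = F.normalize(image_emb, dim=-1)
    logits = text_emb @ image_emb.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(text_emb.size(0), device=text_emb.device)
    # Symmetric cross-entropy over text-to-image and image-to-text directions.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    B, D = 8, 256  # assumed batch size and embedding dimension
    text_a, text_p, text_n = (torch.randn(B, D) for _ in range(3))
    img_a, img_p, img_n = (torch.randn(B, D) for _ in range(3))
    text_emb, image_emb = torch.randn(B, D), torch.randn(B, D)

    total_loss = (intra_modal_triplet_loss(text_a, text_p, text_n)
                  + intra_modal_triplet_loss(img_a, img_p, img_n)
                  + inter_modal_contrastive_loss(text_emb, image_emb))
    print(total_loss.item())
```

In practice the two loss terms would be weighted and combined with a standard classification loss on the fused representation; the weighting scheme is not specified in the abstract.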