Concepts: Computer science, Feature (linguistics), Pattern, Autoencoding, Modality (human-computer interaction), Similarity (geometry), Encoder, Artificial intelligence, Fuse (electrical), Aggregate (composite), Fake news, Feature learning, Deep learning, Modal verb, Machine learning, Image (mathematics), Engineering, Philosophy, Linguistics, Materials science, Internet privacy, Composite material, Sociology, Polymer chemistry, Electrical engineering, Operating system, Social science, Chemistry
Authors
Yangming Zhou,Yuzhou Yang,Qichao Ying,Zhenxing Qian,Xinpeng Zhang
Identifier
DOI:10.1109/icme55011.2023.00480
Abstract
Fake news detection (FND) has attracted much research interest in social forensics. Many existing approaches introduce tailored attention mechanisms to fuse unimodal features, but they ignore the impact of cross-modal similarity between modalities. Meanwhile, the potential of pretrained multimodal feature learning models for FND has not been well exploited. This paper proposes FND-CLIP, a multimodal Fake News Detection framework based on Contrastive Language-Image Pretraining (CLIP). FND-CLIP extracts deep representations from news using two unimodal encoders and two pair-wise CLIP encoders. The CLIP-generated multimodal features are weighted by the CLIP similarity between the two modalities, and a modality-wise attention module aggregates the features. Extensive experiments show that the proposed framework is better at mining crucial features for fake news detection, and FND-CLIP achieves better performance than previous works on three typical fake news datasets.
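The abstract describes two fusion steps: weighting the CLIP-generated multimodal feature by the cross-modal similarity of the paired CLIP embeddings, and aggregating the modal features with a modality-wise attention module. The sketch below illustrates that idea only; the 512-dimensional features, the linear attention scorer, and the module names are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimilarityWeightedFusion(nn.Module):
    """Sketch: weight paired CLIP features by their cosine similarity and
    aggregate the modal features with a simple modality-wise attention."""

    def __init__(self, dim: int = 512):
        super().__init__()
        self.modality_attn = nn.Linear(dim, 1)  # one score per modal feature
        self.classifier = nn.Linear(dim, 2)     # real vs. fake logits

    def forward(self, text_feat, image_feat, clip_text_feat, clip_image_feat):
        # Cosine similarity of the paired CLIP embeddings, mapped to [0, 1],
        # used to weight the CLIP-generated multimodal feature.
        sim = F.cosine_similarity(clip_text_feat, clip_image_feat, dim=-1)
        weight = ((sim + 1.0) / 2.0).unsqueeze(-1)                        # (B, 1)
        clip_fused = weight * (clip_text_feat + clip_image_feat)

        # Stack text-only, image-only, and CLIP-fused features, then take a
        # modality-wise attention-weighted sum over the three modalities.
        feats = torch.stack([text_feat, image_feat, clip_fused], dim=1)   # (B, 3, dim)
        attn = torch.softmax(self.modality_attn(feats), dim=1)            # (B, 3, 1)
        aggregated = (attn * feats).sum(dim=1)                            # (B, dim)
        return self.classifier(aggregated)


# Example with random tensors standing in for the encoder outputs.
model = SimilarityWeightedFusion(dim=512)
t, v = torch.randn(4, 512), torch.randn(4, 512)
ct, cv = torch.randn(4, 512), torch.randn(4, 512)
logits = model(t, v, ct, cv)  # shape (4, 2)
```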