Subject tags
Computer science, Robustness (evolution), Coding, Artificial intelligence, Graph, Landmark, Detector, Pattern recognition (psychology), Machine learning, Theoretical computer science, Biochemistry, Telecommunications, Gene, Chemistry
Authors
Zhiyuan Yan, Peng Sun, Yubo Lang, Shuo Du, Shanzhuo Zhang, Wei Wang, Lei Liu
Source
Journal: Cornell University - arXiv
Date: 2022-01-01
Citations: 1
Identifier
DOI:10.48550/arxiv.2209.05419
Abstract
Existing deepfake detectors face several challenges in achieving robustness and generalization. One of the primary reasons is their limited ability to extract relevant information from forgery videos, especially in the presence of various artifacts such as spatial, frequency, temporal, and landmark mismatches. Current detectors rely on pixel-level features that are easily affected by unknown disturbances, or on facial landmarks that do not provide sufficient information. Furthermore, most detectors cannot utilize information from multiple domains for detection, leading to limited effectiveness in identifying deepfake videos. To address these limitations, we propose a novel framework, named Multimodal Graph Learning (MGL), that leverages information from multiple modalities using two GNNs and several multimodal fusion modules. At the frame level, we employ a bi-directional cross-modal transformer and an adaptive gating mechanism to combine the features from the spatial and frequency domains with the geometric-enhanced landmark features captured by a GNN. At the video level, we use a Graph Attention Network (GAT) to represent each frame in a video as a node in a graph and encode temporal information into the edges of the graph to extract temporal inconsistency between frames. Our proposed method aims to effectively identify and utilize distinguishing features for deepfake detection. We evaluate the effectiveness of our method through extensive experiments on widely used benchmarks and demonstrate that our method outperforms state-of-the-art detectors in terms of generalization ability and robustness against unknown disturbances.
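The video-level idea in the abstract — frames as graph nodes, temporal adjacency as edges, attention-weighted aggregation — can be sketched with a single standard graph-attention layer. This is a minimal NumPy illustration of the generic GAT mechanism, not the authors' implementation; all shapes, the temporal-chain adjacency, and the parameter names (`W`, `a`) are assumptions for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gat_layer(H, A, W, a):
    """One generic graph-attention layer over video frames (a sketch).
    H: (N, F) per-frame feature vectors, one node per frame.
    A: (N, N) adjacency mask; here, edges link temporally adjacent frames.
    W: (F, F2) shared linear projection; a: (2*F2,) attention vector.
    """
    Wh = H @ W                                   # project node features
    N = Wh.shape[0]
    e = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            cat = np.concatenate([Wh[i], Wh[j]])
            z = a @ cat
            e[i, j] = np.maximum(0.2 * z, z)     # LeakyReLU(a^T [Wh_i || Wh_j])
    e = np.where(A > 0, e, -1e9)                 # keep only graph edges
    alpha = softmax(e, axis=1)                   # normalise over neighbours
    return alpha @ Wh                            # attention-weighted aggregation

# Temporal chain: each frame attends to itself and its neighbours in time.
N, F = 5, 4
rng = np.random.default_rng(0)
H = rng.standard_normal((N, F))
A = np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
out = gat_layer(H, A, rng.standard_normal((F, F)), rng.standard_normal(2 * F))
```

Because non-edges are masked before the softmax, each frame's output mixes only its own features with those of adjacent frames, which is one plausible way temporal inconsistency between neighbouring frames can surface in the learned representation.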