Computer science
Artificial intelligence
Adversarial system
Encoder
Generator (circuit theory)
Process (computing)
Pattern recognition (psychology)
Feature extraction
Artifact (error)
Visualization
Machine learning
Pixel
Power (physics)
Physics
Quantum mechanics
Operating system
Authors
Xin Li, Rongrong Ni, Pengpeng Yang, Zhiqiang Fu, Yao Zhao
Source
Journal: IEEE Transactions on Circuits and Systems for Video Technology
[Institute of Electrical and Electronics Engineers]
Date: 2022-11-04
Volume/Issue: 33 (4): 1658-1670
Citations: 31
Identifier
DOI:10.1109/tcsvt.2022.3217950
Abstract
Due to the development of facial manipulation technologies, generated deepfake videos have caused a severe trust crisis in society. Existing methods show that effective extraction of the artifacts introduced during the forgery process is essential for deepfake detection. However, because the features extracted by supervised binary classification contain much artifact-irrelevant information, existing algorithms suffer severe performance degradation when the training and testing datasets do not match. To overcome this issue, we propose an Artifacts-Disentangled Adversarial Learning (ADAL) framework that achieves accurate deepfake detection by disentangling the artifacts from irrelevant information. The proposed algorithm also provides visual evidence by effectively estimating the artifacts. Specifically, a Multi-scale Feature Separator (MFS) in the disentanglement generator is designed to transmit the artifact features precisely and to optimize the connection between the encoder and the decoder. In addition, we design an Artifacts Cycle Consistency Loss (ACCL) that uses the disentangled artifacts to construct new samples, enabling pixel-level supervised training of the generator so that it estimates more accurate artifacts. Symmetric discriminators operate in parallel to differentiate the constructed samples from the original images in both the fake and real domains, making the adversarial training process more stable. Extensive experiments on existing benchmarks demonstrate that the proposed method outperforms state-of-the-art approaches.
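The abstract describes the ADAL pipeline only in prose; the following minimal PyTorch sketch illustrates how disentangled artifacts can be used to construct new samples, impose a pixel-level cycle loss, and feed two symmetric per-domain discriminators. All module shapes, the purely additive artifact model, the softplus adversarial objective, and the loss weights are illustrative assumptions for this sketch, not the authors' published ADAL/MFS/ACCL implementation.

```python
# Minimal sketch of the artifacts-disentanglement-and-cycle idea from the abstract.
# Everything here (architectures, additive artifact model, weights) is assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ArtifactGenerator(nn.Module):
    """Toy disentanglement generator: maps an image to an estimated artifact
    map, which should be close to zero for pristine (real) inputs."""
    def __init__(self, ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


class PatchCritic(nn.Module):
    """PatchGAN-style discriminator; one instance per domain (real / fake)."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch * 2, 1, 4, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def generator_step(gen, d_real, d_fake, x_real, x_fake, lam_cyc=10.0):
    """Generator objective: artifacts cycle consistency (pixel level) plus
    adversarial terms from the two symmetric discriminators."""
    art = gen(x_fake)                 # artifacts disentangled from a fake face
    pseudo_fake = x_real + art        # new sample: real face + transplanted artifacts
    pseudo_real = x_fake - art        # new sample: fake face with artifacts removed

    # Cycle consistency: stripping the re-estimated artifacts from the
    # constructed fake should recover the original real image pixel by pixel.
    loss_cycle = F.l1_loss(pseudo_fake - gen(pseudo_fake), x_real)

    # Real images are expected to carry (almost) no artifacts.
    loss_clean = gen(x_real).abs().mean()

    # Constructed samples should fool the critic of their target domain.
    loss_adv = (F.softplus(-d_fake(pseudo_fake)).mean()
                + F.softplus(-d_real(pseudo_real)).mean())
    return lam_cyc * loss_cycle + loss_clean + loss_adv


def discriminator_step(gen, d_real, d_fake, x_real, x_fake):
    """Symmetric critics: each separates constructed samples from original
    images within its own domain, as described in the abstract."""
    with torch.no_grad():
        art = gen(x_fake)
        pseudo_fake, pseudo_real = x_real + art, x_fake - art
    loss_fake_domain = (F.softplus(-d_fake(x_fake)).mean()
                        + F.softplus(d_fake(pseudo_fake)).mean())
    loss_real_domain = (F.softplus(-d_real(x_real)).mean()
                        + F.softplus(d_real(pseudo_real)).mean())
    return loss_fake_domain + loss_real_domain
```

In this sketch each critic only judges realism within its own domain (fake vs. constructed fake, real vs. constructed real), which is the property the abstract credits for making the adversarial training more stable; the constructed samples additionally give the generator a dense, pixel-level supervision signal for estimating artifacts.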