Computer Science
Computer Vision
Optometry Services
Artificial Intelligence
Data Science
Medicine
Authors
Z Q Wang,Zhongyao Cheng,Jiang Xiong,Xun Xu,Tianrui Li,Bharadwaj Veeravalli,Xulei Yang
Source
Journal: Cornell University - arXiv
Date: 2024-05-14
Identifier
DOI: 10.48550/arXiv.2405.08463
Abstract
In recent years, the rapid advancement of deepfake technology has revolutionized content creation, lowering forgery costs while elevating quality. However, this progress brings forth pressing concerns such as infringements on individual rights, national security threats, and risks to public safety. To counter these challenges, various detection methodologies have emerged, with Vision Transformer (ViT)-based approaches showcasing superior performance in generality and efficiency. This survey presents a timely overview of ViT-based deepfake detection models, categorized into standalone, sequential, and parallel architectures. Furthermore, it succinctly delineates the structure and characteristics of each model. By analyzing existing research and addressing future directions, this survey aims to equip researchers with a nuanced understanding of ViT's pivotal role in deepfake detection, serving as a valuable reference for both academic and practical pursuits in this domain.
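The abstract's "standalone" category refers to detectors that use a plain ViT backbone with a classification head, without coupling it to a CNN branch. Below is a minimal illustrative sketch of that idea, not a reproduction of any model in the survey; it assumes the timm library, and the model name, input size, and binary real/fake head are placeholder choices.

```python
# Minimal sketch of a "standalone" ViT deepfake detector:
# a plain Vision Transformer backbone with a binary real/fake head.
# Assumes timm is installed; model name and hyperparameters are illustrative only.
import torch
import timm

# Two-class head: index 0 = real, index 1 = fake.
model = timm.create_model("vit_base_patch16_224", pretrained=False, num_classes=2)

# Dummy batch of 224x224 RGB face crops standing in for a real dataset.
faces = torch.randn(4, 3, 224, 224)
logits = model(faces)                  # shape: (4, 2)
probs = torch.softmax(logits, dim=-1)  # per-image real/fake probabilities
print(probs)
```

Sequential and parallel designs discussed in the survey differ mainly in how a CNN feature extractor is combined with the transformer (stacked in series versus fused as parallel branches), while the classification head stays similar to the sketch above.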