Disinformation
Open source
Computer science
World Wide Web
Astrobiology
Social media
Physics
Programming language
Software
Identification
DOI:10.1145/3643491.3665282
Abstract
In the summer of 2023, the Writers Guild of America embarked on what would become one of its longest strikes in history. Concurrently, the early stirrings of the presidential campaign saw several ads circulating with convincingly altered video and audio clips of political rivals. Though at first glance unrelated, these events share a common thread: the issue of deepfakes and their potential for spreading disinformation and erasing creative jobs. While deepfakes stirred a sizable debate in both cases, the scale and accessibility of their threat were unclear. What were the limiting factors for using this technology? Was it exclusive to Hollywood studios with large training sets, or was it accessible to an average programmer? We conducted a set of experiments to answer these questions. In particular, we set out to create a photorealistic deepfake of a real news anchor using only open-source tools and models, limited data from the internet, and a consumer laptop. Over a few weeks—as a team comprising one first-year computer science student and his advisor—we accomplished this to the extent that our deepfake opened a primetime CNN show. Contextualizing our findings in the landscape of disinformation, this talk details the development of our deepfake pipeline from start to finish. It offers a discussion highlighting this technology's current ability to deceive and shake industries and suggests potential solutions moving forward.