Misinformation
Harm
Transformative learning
Perspective (graphical)
Classification
Internet privacy
Computer science
Psychology
Political science
Computer security
Social psychology
Artificial intelligence
Developmental psychology
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Citations: 8
Identifier
DOI: 10.48550/arxiv.2309.13788
Abstract
The advent of Large Language Models (LLMs) has made a transformative impact. However, the potential for LLMs such as ChatGPT to be exploited to generate misinformation has raised serious concerns about online safety and public trust. A fundamental research question is: will LLM-generated misinformation cause more harm than human-written misinformation? We propose to tackle this question from the perspective of detection difficulty. We first build a taxonomy of LLM-generated misinformation, then categorize and validate the potential real-world methods for generating misinformation with LLMs. Through extensive empirical investigation, we discover that LLM-generated misinformation can be harder for both humans and detectors to detect than human-written misinformation with the same semantics, which suggests it can have more deceptive styles and potentially cause more harm. We also discuss the implications of our discovery for combating misinformation in the age of LLMs, as well as possible countermeasures.
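To illustrate the detection-difficulty comparison described in the abstract, the following is a minimal sketch (not the authors' code), assuming a paired dataset of human-written and LLM-generated misinformation items with the same semantics and some binary misinformation detector; `toy_detector` and the sample pairs are hypothetical placeholders.

```python
# Sketch: compare how often a detector flags human-written vs. LLM-generated
# misinformation that conveys the same semantics. A lower rate on the
# LLM-generated side would indicate it is harder to detect.
from typing import Callable, List, Tuple


def detection_rate(texts: List[str], detector: Callable[[str], bool]) -> float:
    """Fraction of misinformation items the detector flags as misinformation."""
    return sum(1 for t in texts if detector(t)) / len(texts)


def compare_detectability(
    pairs: List[Tuple[str, str]],  # (human_written, llm_generated), same semantics
    detector: Callable[[str], bool],
) -> Tuple[float, float]:
    human_texts = [h for h, _ in pairs]
    llm_texts = [g for _, g in pairs]
    return detection_rate(human_texts, detector), detection_rate(llm_texts, detector)


if __name__ == "__main__":
    # Toy stand-ins; in the paper's setting these would be real paired samples
    # and either human raters or an automatic misinformation detector.
    pairs = [
        ("Claim X written by a person.", "Claim X rephrased fluently by an LLM."),
    ]
    toy_detector = lambda text: "person" in text  # placeholder heuristic, not a real detector
    human_rate, llm_rate = compare_detectability(pairs, toy_detector)
    print(f"detection rate (human-written):  {human_rate:.2f}")
    print(f"detection rate (LLM-generated): {llm_rate:.2f}")
```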