Skepticism
Automation
Artificial intelligence
Computer science
Psychology
Engineering
Epistemology
Philosophy
Mechanical engineering
Authors
Sacha Altay, Fabrizio Gilardi
Identifier
DOI: 10.31234/osf.io/83k9r_v1
Abstract
The rise of generative AI tools has sparked debates about the labeling of AI-generated content, yet the impact of such labels remains uncertain. In two pre-registered online experiments among US and UK participants (N = 4,976), we show that while participants did not equate "AI-generated" with "false", labeling headlines as AI-generated lowered both their perceived accuracy and participants' willingness to share them, regardless of whether the headlines were true or false, or created by humans or AI. The impact of labeling headlines as AI-generated was three times smaller than that of labeling them as false. This AI aversion stems from the expectation that headlines labeled as AI-generated were written entirely by AI without human supervision. These findings suggest that the labeling of AI-generated content should be approached cautiously to avoid unintended negative effects on harmless or even beneficial AI-generated content, and that effective deployment of labels requires transparency about their meaning.