Generative Artificial Intelligence and Misinformation Acceptance: An Experimental Test of the Effect of Forewarning About Artificial Intelligence Hallucination
Generative artificial intelligence (AI) tools can produce statements that appear plausible but are factually incorrect. This phenomenon, referred to as AI hallucination, can contribute to the generation and dissemination of misinformation. The present study therefore examined whether forewarning about AI hallucination reduces individuals' acceptance of AI-generated misinformation. An online experiment with 208 Korean adults demonstrated that AI hallucination forewarning reduced misinformation acceptance (p = 0.001, Cohen's d = 0.45), whereas it did not reduce acceptance of true information (p = 0.91). In addition, the effect of AI hallucination forewarning on misinformation acceptance was moderated by preference for effortful thinking (p < 0.01), such that forewarning decreased misinformation acceptance when preference for effortful thinking was high (vs. low).