Misinformation
Reliability
Context (archaeology)
Order (exchange)
Social media
Psychology
Internet privacy
Computer science
Social psychology
Political science
World Wide Web
Computer security
Business
Law
Paleontology
Finance
Biology
Authors
Lin Teng,Yi-Qing Zhang
Abstract
With the development of large language models, the capability of AI to generate information has improved significantly, leading to its widespread use in content production. However, the growing persuasiveness of AI-generated content (AIGC) also makes AI-generated misinformation increasingly difficult to discern, and AI disclosures are therefore being used both to help people counteract the negative effects of misinformation and to increase acceptance of AIGC. To assess the effectiveness of AI disclosure applied to popular science articles on social media, we conducted a within-subject experiment (N=419) in the context of the Chinese internet environment. The results indicated that AI disclosure not only made people more likely to believe AI-generated misinformation but also reduced their perceived trust in AI-generated accurate information. This effect diminished as the audience's attitude toward AI became more negative. For AI-generated misinformation, the moderating effect of negative attitudes toward AI depended on the topic of the message. For AI-generated accurate information, the negative impact of AI disclosure may disappear when the audience's negative attitude toward AI is weak. Furthermore, the audience's level of involvement with the information had no significant effect. The study provides new empirical evidence for the debate on the effectiveness of AI disclosure, while also highlighting potential issues in its practical application and discussing the prospects of AI disclosure.