Misinformation
Reliability
Crowdsourcing
Source credibility
Politics
Test (biology)
Variance (accounting)
Social psychology
Psychology
Computer science
Political science
Law
World Wide Web
Computer security
Economics
Paleontology
Accounting
Biology
Authors
Myojung Chung,Won-Ki Moon,S. Mo Jang
Identifier
DOI:10.1080/21670811.2023.2254820
Abstract
While fact-checking has received much attention as a tool to fight misinformation online, fact-checking efforts have yielded limited success in combating political misinformation due to partisans' biased information processing. The efficacy of fact-checking often decreases, if not backfires, when the fact-checking messages contradict individual audiences' political stance. To explore ways to minimize such politically biased processing of fact-checking messages, an online experiment (N = 645) examined how different source labels of fact-checking messages (human experts vs. AI vs. crowdsourcing vs. human experts-AI hybrid) influence partisans' processing of fact-checking messages. Results showed that AI and crowdsourcing source labels significantly reduced motivated reasoning in evaluating the credibility of fact-checking messages, whereas the partisan bias remained evident for the human experts and human experts-AI hybrid source labels.

Keywords: AI; artificial intelligence; fact-checking; misinformation; message credibility; fake news; motivated reasoning; social media

Disclosure Statement
No potential conflict of interest was reported by the author(s).

Notes
1. A series of analyses of variance (ANOVA) and Chi-square tests found no significant demographic differences between conditions (p = .099 for age; p = .522 for gender; p = .417 for income; p = .364 for education; p = .549 for political partisanship; p = .153 for political ideology; p = .493 for frequency of social media use). Thus, randomization was deemed successful.
2. To further explore differences in message credibility across the four fact-checking source labels, a one-way ANOVA and a Bonferroni post hoc test were conducted. The results showed significant differences across the four source labels in shaping message credibility, F(3, 641) = 2.82, p = .038, Cohen's d = 0.23. Those in the AI condition reported the highest message credibility (M = 3.89, SD = 0.79), followed by the human experts condition (M = 3.86, SD = 0.89) and the human experts-AI condition (M = 3.84, SD = 0.81). The crowdsourcing condition showed the lowest message credibility (M = 3.66, SD = 0.81). The post hoc test indicated that the AI source label induced significantly higher message credibility than the crowdsourcing source label (p = .042). However, no significant differences were found among the other source labels.
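Note 2 describes a one-way ANOVA followed by Bonferroni-corrected pairwise comparisons across the four source-label conditions. The snippet below is a minimal sketch of that kind of analysis in Python with scipy; the scores are randomly generated placeholders drawn around the means and standard deviations reported in the note, and the group sizes are assumed to be roughly equal. It is an illustration of the procedure, not the study's data or code.

```python
# Sketch of a one-way ANOVA plus Bonferroni post hoc comparisons across
# four fact-checking source-label conditions. All data are simulated
# placeholders; group sizes are assumptions.
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
conditions = {
    "human_experts":     rng.normal(3.86, 0.89, 160),
    "ai":                rng.normal(3.89, 0.79, 160),
    "crowdsourcing":     rng.normal(3.66, 0.81, 160),
    "experts_ai_hybrid": rng.normal(3.84, 0.81, 165),
}

# Omnibus test: does message credibility differ across the four labels?
f_stat, p_val = stats.f_oneway(*conditions.values())
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}")

# Bonferroni post hoc: pairwise t-tests with p-values multiplied by the
# number of comparisons (6 pairs), capped at 1.0.
pairs = list(combinations(conditions, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(conditions[a], conditions[b])
    adj_p = min(p * len(pairs), 1.0)
    print(f"{a} vs {b}: t = {t:.2f}, Bonferroni-adjusted p = {adj_p:.3f}")
```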