Misinformation
Normativity
Psychological intervention
Social media
Social psychology
Identity (music)
Norm (philosophy)
Intervention (counseling)
Context (archaeology)
Social identity theory
Psychology
Internet privacy
Political science
Computer science
Social group
Law
Paleontology
Physics
Psychiatry
Biology
Acoustics
Authors
Jay J. Van Bavel, Ali Javeed, Diána Hughes, Kobi Hackenburg, Manos Tsakiris, Óscar Vilarroya
Identifiers
DOI: 10.1098/rstb.2023.0040
Abstract
Interventions to counter misinformation are often less effective for polarizing content on social media platforms. We sought to overcome this limitation by testing an identity-based intervention, which aims to promote accuracy by incorporating normative cues directly into the social media user interface. Across three pre-registered experiments in the US (N = 1709) and UK (N = 804), we found that crowdsourcing accuracy judgements by adding a Misleading count (next to the Like count) reduced participants' reported likelihood to share inaccurate information about partisan issues by 25% (compared with a control condition). The Misleading count was also more effective when it reflected in-group norms (from fellow Democrats/Republicans) compared with the norms of general users, though this effect was absent in a less politically polarized context (UK). Moreover, the normative intervention was roughly five times as effective as another popular misinformation intervention (i.e. the accuracy nudge reduced sharing misinformation by 5%). Extreme partisanship did not undermine the effectiveness of the intervention. Our results suggest that identity-based interventions based on the science of social norms can be more effective than identity-neutral alternatives to counter partisan misinformation in politically polarized contexts (e.g. the US). This article is part of the theme issue ‘Social norm change: drivers and consequences’.