Identifier
DOI:10.1080/1461670x.2021.1916984
Abstract
As the use of algorithms has emerged in journalism, analytic/algorithmic journalism (AJ) has developed rapidly in major news organizations. Despite this surging trend, little is known about the role and effects of explainability in the process by which people perceive and come to trust an algorithm-driven AI system. While AJ has greatly benefited from increasingly sophisticated algorithmic technologies, it suffers from a lack of transparency and understandability for readers. We identify explainability as a heuristic cue of an algorithm and conceptualize it in relation to trust by testing how it affects users' emotional responses to AJ. Our experiments show that adding interpretable explanations enhances trust in the context of AJ, and that readers' trust hinges on the perceived normative values used to assess algorithmic quality. Explanations of why certain news articles are recommended give users emotional assurance and affirmation. Mediation analyses show that explanatory cues play a mediating role between trust and performance expectancy. The results have implications for the inclusion of explanatory cues in AJ, which can increase credibility and help users assess the value of AJ.