Explainability
Accountability
Transparency (behavior)
Perspective (graphical)
Computer science
Perception
Antecedent (behavioral psychology)
Quality (philosophy)
Key (lock)
Affect (linguistics)
Trace (psycholinguistics)
Relation (database)
Psychology
Artificial intelligence
Social psychology
Epistemology
Computer security
Philosophy
Neuroscience
Law
Database
Communication
Linguistics
Political science
Identification
DOI:10.1016/j.ijhcs.2020.102551
Abstract
Artificial intelligence and algorithmic decision-making processes are increasingly criticized for their black-box nature. Explainable AI approaches that trace human-interpretable decision processes from algorithms have been explored. Yet, little is known about algorithmic explainability from a human-factors perspective. From the perspective of user interpretability and understandability, this study examines the effect of explainability in AI on user trust and attitudes toward AI. It conceptualizes causability as an antecedent of explainability and as a key cue of an algorithm, and examines both in relation to trust by testing how they affect users' perceived performance of AI-driven services. The results show the dual roles of causability and explainability in terms of their underlying links to trust and subsequent user behaviors. Explanations of why certain news articles are recommended generate user trust, whereas causability, the extent to which users can understand those explanations, affords users emotional confidence. Causability provides the justification for what should be explained and how, as it determines the relative importance of the properties of explainability. The results have implications for the inclusion of causability and explanatory cues in AI systems, which help to increase trust and help users to assess the quality of explanations. Causable explainable AI will help people understand the decision-making process of AI algorithms by bringing transparency and accountability into AI systems.