Computer science
Unintended consequences
Spillover effect
Information processing
Affect (linguistics)
Artificial intelligence
Human intelligence
Black box
Data science
Risk analysis (engineering)
Cognitive psychology
Psychology
Business
Microeconomics
Law
Economics
Communication
Political science
Authors
Kevin Bauer,Moritz von Zahn,Oliver Hinz
Source
Journal: Information Systems Research
Publisher: Institute for Operations Research and the Management Sciences
Date: 2023-03-03
Volume/Issue: 34 (4): 1582-1602
Citations: 36
Identifiers
DOI: 10.1287/isre.2023.1199
Abstract
Although future regulations increasingly advocate that AI applications must be interpretable by users, we know little about how such explainability can affect human information processing. By conducting two experimental studies, we help to fill this gap. We show that explanations pave the way for AI systems to reshape users' understanding of the world around them. Specifically, state-of-the-art explainability methods evoke mental model adjustments that are subject to confirmation bias, allowing misconceptions and mental errors to persist and even accumulate. Moreover, mental model adjustments create spillover effects that alter users' behavior in related but distinct domains where they do not have access to an AI system. These spillover effects of mental model adjustments risk manipulating user behavior, promoting discriminatory biases, and biasing decision making. The reported findings serve as a warning that the indiscriminate use of modern explainability methods as an isolated measure to address AI systems' black-box problems can lead to unintended, unforeseen problems because it creates a new channel through which AI systems can influence human behavior in various domains.
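For readers unfamiliar with the kind of "state-of-the-art explainability methods" the abstract refers to, the sketch below illustrates one common feature-attribution explanation (permutation importance over a scikit-learn classifier). The dataset, model, and feature names are illustrative assumptions only; this is not the authors' experimental setup or the specific method studied in the paper.

```python
# Minimal sketch of a feature-attribution explanation (hypothetical data/model,
# not the paper's setup): it tells a user which inputs drive the AI's predictions,
# which is the kind of signal users fold into their mental models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical tabular data: three features, binary label driven mostly by feature 0.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Global explanation: how much each feature contributes to held-out accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in zip(["feature_0", "feature_1", "feature_2"], result.importances_mean):
    print(f"{name}: importance {imp:.3f}")
```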