Keywords
Transparency (behavior), Advice (programming), Unintended consequences, Context (archaeology), Computer science, Psychology, Social psychology, Internet privacy, Computer security, Political science, Biology, Paleontology, Programming languages, Law
Authors
Anuschka Schmitt, Thiemo Wambsganß, Matthias Söllner, Andreas Janson
Abstract
Beyond AI-based systems’ potential to augment decision-making, conserve organizational resources, and counter human biases, the unintended consequences of such systems have so far been largely neglected. Researchers are undecided on whether erroneous advice acts as an impediment to system use or is blindly relied upon. As part of an experimental study, we turn towards the impact of incorrect system advice and how to design for failure-prone AI. In an experiment with 156 subjects, we find that, although incorrect algorithmic advice is trusted less, users adapt their answers to a system’s incorrect recommendations. While transparency about a system’s accuracy levels fosters trust and reliance in the context of incorrect advice, the opposite effect is found for users exposed to correct advice. Our findings point towards a paradoxical gap between stated trust and actual behavior. Furthermore, transparency mechanisms should be deployed with caution, as their effectiveness is intertwined with system performance.