Advice (programming)
Conceptualization
Imperfect
Discretion
Computer science
Psychology
Management science
Artificial intelligence
Engineering
Political science
Law
Philosophy
Linguistics
Programming language
Authors
Max Schemmer, Niklas Kuehl, Carina Benz, Andrea Bartos, Gerhard Satzger
Identifier
DOI:10.1145/3581641.3584066
Abstract
AI advice is becoming increasingly popular, e.g., in investment and medical treatment decisions. As this advice is typically imperfect, decision-makers have to exert discretion as to whether to actually follow it: they have to "appropriately" rely on correct advice and turn down incorrect advice. However, current research on appropriate reliance still lacks a common definition as well as an operational measurement concept. Additionally, no in-depth behavioral experiments have been conducted that help understand the factors influencing this behavior. In this paper, we propose Appropriateness of Reliance (AoR) as an underlying, quantifiable two-dimensional measurement concept. We develop a research model that analyzes the effect of providing explanations for AI advice. In an experiment with 200 participants, we demonstrate how these explanations influence the AoR and, thus, the effectiveness of AI advice. Our work contributes fundamental concepts for the analysis of reliance behavior and the purposeful design of AI advisors.