Tags: Reliability (semiconductor), Task (project management), Action (physics), Matching (statistics), Compliance (psychology), Poison control, Psychology, Computer science, Applied psychology, Social psychology, Engineering, Statistics, Medicine, Mathematics, Environmental health, Power (physics), Physics, Systems engineering, Quantum mechanics
Authors
H. W. Elder, Casey Canfield, Daniel B. Shank, Tobias Rieger, Casey Hines
Source
Journal: Human Factors
[SAGE]
Date: 2022-05-21
Volume/Issue: 66 (2): 348-362
Citations: 7
Identifier
DOI: 10.1177/00187208221100691
Abstract
Objective: This study manipulates the presence and reliability of AI recommendations for risky decisions to measure the effect on task performance, behavioral consequences of trust, and deviation from a probability-matching collaborative decision-making model.
Background: Although AI decision support improves performance, people tend to underutilize AI recommendations, particularly when outcomes are uncertain. As AI reliability increases, task performance improves, largely due to higher rates of compliance (following action recommendations) and reliance (following no-action recommendations).
Methods: In a between-subjects design, participants were assigned to a high-reliability AI, low-reliability AI, or a control condition. Participants decided whether to bet that their team would win in a series of basketball games, with compensation tied to performance. We evaluated task performance (in accuracy and signal-detection terms) and the behavioral consequences of trust (via compliance and reliance).
Results: AI recommendations improved task performance, had limited impact on risk-taking behavior, and were undervalued by participants. Accuracy, sensitivity (d′), and reliance increased in the high-reliability AI condition, but there was no effect on response bias (c) or compliance. Participant behavior was consistent with a probability-matching model only for compliance in the low-reliability condition.
Conclusion: In a pay-off structure that incentivized risk-taking, the primary value of the AI recommendations was in determining when to perform no action (i.e., pass on bets).
Application: In risky contexts, designers need to consider whether action or no-action recommendations will be more influential in order to design appropriate interventions.
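The abstract reports sensitivity (d′) and response bias (c), the standard signal-detection-theory measures. As a minimal sketch of how these are conventionally computed from hit and false-alarm rates (this is the textbook formula, not the authors' analysis code; the function name and example rates are illustrative assumptions):

```python
from statistics import NormalDist

def dprime_and_c(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Compute signal-detection sensitivity (d') and response bias (c).

    d' = z(H) - z(F): separation between signal and noise distributions.
    c  = -(z(H) + z(F)) / 2: zero means no bias; positive means a
    conservative criterion (fewer "yes"/bet responses).
    Rates must lie strictly between 0 and 1 (apply a correction first
    if a rate is exactly 0 or 1).
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    c = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, c

# Symmetric performance (H = 0.8, F = 0.2) yields zero response bias.
d, c = dprime_and_c(0.8, 0.2)
```

In the study's terms, a higher d′ in the high-reliability condition indicates better discrimination of winnable bets, while an unchanged c indicates the AI did not shift participants' overall willingness to bet.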