Topics
Medical diagnosis, Advice (programming), Computer science, Cognition, Psychology, Cognitive bias, Decision support system, Artificial intelligence, Process (computing), Medical decision making, Medicine, Emergency medicine, Psychiatry, Operating system, Pathology, Programming language
Authors
Ekaterina Jussupow,Kai Spohrer,Armin Heinzl,Joshua Gawlitza
Source
Journal: Information Systems Research
[Institute for Operations Research and the Management Sciences]
Date: 2021-02-26
Volume/Issue: 32 (3): 713-735
Cited by: 162
Identifier
DOI:10.1287/isre.2020.0980
Abstract
Systems based on artificial intelligence (AI) increasingly support physicians in diagnostic decisions, but they are not without errors and biases. Failure to detect those may result in wrong diagnoses and medical errors. Compared with rule-based systems, however, these systems are less transparent and their errors less predictable. Thus, it is difficult, yet critical, for physicians to carefully evaluate AI advice. This study uncovers the cognitive challenges that medical decision makers face when they receive potentially incorrect advice from AI-based diagnosis systems and must decide whether to follow or reject it. In experiments with 68 novice and 12 experienced physicians, novice physicians with and without clinical experience as well as experienced radiologists made more inaccurate diagnosis decisions when provided with incorrect AI advice than without advice at all. We elicit five decision-making patterns and show that wrong diagnostic decisions often result from shortcomings in utilizing metacognitions related to decision makers’ own reasoning (self-monitoring) and metacognitions related to the AI-based system (system monitoring). As a result, physicians fall for decisions based on beliefs rather than actual data or engage in unsuitably superficial evaluation of the AI advice. Our study has implications for the training of physicians and spotlights the crucial role of human actors in compensating for AI errors.