Keywords
Interrogation
Work (physics)
Computer science
Psychology
Knowledge management
Artificial intelligence
Political science
Engineering
Law
Mechanical engineering
Authors
Sarah Lebovitz, Hila Lifshitz-Assaf, Natalia Levina
Source
Journal: Organization Science
Publisher: Institute for Operations Research and the Management Sciences
Date: 2022-01-01
Volume/Issue: 33 (1): 126-148
Citations: 168
Identifiers
DOI: 10.1287/orsc.2021.1549
Abstract
Artificial intelligence (AI) technologies promise to transform how professionals conduct knowledge work by augmenting their capabilities for making professional judgments. We know little, however, about how human-AI augmentation takes place in practice. Yet, gaining this understanding is particularly important when professionals use AI tools to form judgments on critical decisions. We conducted an in-depth field study in a major U.S. hospital where AI tools were used in three departments by diagnostic radiologists making breast cancer, lung cancer, and bone age determinations. The study illustrates the hindering effects of opacity that professionals experienced when using AI tools and explores how these professionals grappled with it in practice. In all three departments, this opacity resulted in professionals experiencing increased uncertainty because AI tool results often diverged from their initial judgment without providing underlying reasoning. Only in one department (of the three) did professionals consistently incorporate AI results into their final judgments, achieving what we call engaged augmentation. These professionals invested in AI interrogation practices—practices enacted by human experts to relate their own knowledge claims to AI knowledge claims. Professionals in the other two departments did not enact such practices and did not incorporate AI inputs into their final decisions, which we call unengaged “augmentation.” Our study unpacks the challenges involved in augmenting professional judgment with powerful, yet opaque, technologies and contributes to literature on AI adoption in knowledge work.