Authors
Carlo Reverberi,Tommaso Rigon,Aldo Solari,Cesare Hassan,Paolo Cherubini,Giulio Antonelli,Halim Awadie,Sebastian Bernhofer,Sabela Carballal,Mário Dinis‐Ribeiro,A Fernández-Clotet,Glòria Fernández–Esparrach,Ian M. Gralnek,Yuta Higasa,Taku Hirabayashi,Tatsuki Hirai,Mineo Iwatate,Miki Kawano,Markus Mader,A Maieron,Sebastian Mattes,Tastuya Nakai,Íngrid Ordás,Raquel Ortigão,Oswaldo Ortíz,María Pellisé,Cláudia Lúcia de Oliveira Pinto,Florian Riedl,Ariadna Sánchez,Emanuel Steiner,Yukari Tanaka,Andrea Cherubini
Abstract
Artificial intelligence (AI) systems are a valuable support for decision-making, with many applications in the medical domain. The interaction between MDs and AI is attracting renewed interest following the expanded capabilities of deep-learning devices. However, we still have limited evidence-based knowledge of the context, design, and psychological mechanisms that shape an optimal human–AI collaboration. In this multicentric study, 21 endoscopists reviewed 504 videos of lesions prospectively acquired from real colonoscopies. They were asked to provide an optical diagnosis with and without the assistance of an AI support system. Endoscopists were influenced by the AI (OR = 3.05), but not erratically: they followed the AI advice more often when it was correct (OR = 3.48) than when it was incorrect (OR = 1.85). Endoscopists achieved this outcome through a weighted integration of their own opinion and the AI's, informed by case-by-case estimates of the two reliabilities. This Bayesian-like rational behavior allowed the human–AI hybrid team to outperform both agents taken alone. We discuss the features of the human–AI interaction that determined this favorable outcome.
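The reliability-weighted integration described above is consistent with the standard Bayesian cue-combination rule; the following is a minimal sketch of that rule under inverse-variance weighting (the notation $d_{\mathrm{MD}}$, $d_{\mathrm{AI}}$, $\sigma^{2}$ is illustrative, not the study's exact fitted model):

$$ w_{\mathrm{MD}} = \frac{1}{\sigma_{\mathrm{MD}}^{2}}, \qquad w_{\mathrm{AI}} = \frac{1}{\sigma_{\mathrm{AI}}^{2}}, \qquad \hat{d} = \frac{w_{\mathrm{MD}}\, d_{\mathrm{MD}} + w_{\mathrm{AI}}\, d_{\mathrm{AI}}}{w_{\mathrm{MD}} + w_{\mathrm{AI}}} $$

Here $d_{\mathrm{MD}}$ and $d_{\mathrm{AI}}$ are the endoscopist's and the AI's diagnostic estimates, and $\sigma_{\mathrm{MD}}^{2}$, $\sigma_{\mathrm{AI}}^{2}$ their case-by-case uncertainties. Because each opinion is weighted by its estimated reliability, the combined judgment $\hat{d}$ has lower variance than either source alone, which is one way such a hybrid team can outperform both agents.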