Computer science
Artificial intelligence
Usability
Medical diagnosis
Focus (optics)
Set (abstract data type)
Machine learning
Perception
Task (project management)
Human-computer interaction
Medicine
Radiology
Economics
Programming language
Management
Neuroscience
Physics
Optics
Biology
Authors
Federico Cabitza, Andrea Campagner, Lorenzo Famiglini, Enrico Gallazzi, Giovanni Andrea La Maida
Identifier
DOI: 10.1007/978-3-031-14463-9_3
Abstract
Although deep learning-based AI systems for diagnostic imaging tasks have shown virtually superhuman accuracy, their use in medical settings has been questioned due to their "black box", non-interpretable nature. To address this shortcoming, several methods have been proposed to make AI eXplainable (XAI), including Pixel Attribution Methods (PAMs); however, it is still unclear whether these methods are actually effective in "opening" the black box and improving diagnosis, particularly in tasks where pathological conditions are difficult to detect. In this study, we focus on the detection of thoraco-lumbar fractures from X-rays, with the goal of assessing the impact of PAMs on diagnostic decision making by addressing two separate research questions: first, whether activation maps (AMs, an instance of PAMs) were perceived as useful in the aforementioned task; and, second, whether the maps were also capable of reducing the diagnostic error rate. We show that, even though AMs were not considered significantly useful by physicians, the image readers found high value in the maps along other perceptual dimensions (i.e., pertinency and coherence) and, most importantly, their accuracy significantly improved when given XAI support in a pilot study involving 7 doctors in the interpretation of a small, but carefully chosen, set of images.
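The abstract refers to activation maps as the Pixel Attribution Method under study but does not include an implementation. Below is a minimal Grad-CAM-style sketch of how such a heatmap is commonly produced, assuming a PyTorch CNN classifier; the resnet18 stand-in model, the hooked layer, and the random input tensor are illustrative assumptions, not the authors' actual setup.

```python
# Minimal Grad-CAM sketch (one common Pixel Attribution Method).
# Model, layer choice, and input are placeholder assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)  # stand-in for a fracture classifier
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    # Cache the feature maps of the hooked layer on the forward pass.
    activations["feat"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    # Cache the gradient of the class score w.r.t. those feature maps.
    gradients["feat"] = grad_out[0].detach()

# Hook the last convolutional block; the layer choice is an assumption.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)        # placeholder for an X-ray tensor
logits = model(x)
score = logits[0, logits.argmax()]     # score of the predicted class
model.zero_grad()
score.backward()

# Weight each feature map by its spatially averaged gradient, combine,
# and keep only positive evidence (ReLU), as in standard Grad-CAM.
w = gradients["feat"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
cam = F.relu((w * activations["feat"]).sum(dim=1))     # (1, H', W')
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                    mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
# `cam` is now a [0, 1] saliency heatmap that can be overlaid on the
# input image, which is how readers in studies like this one see it.
```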