Medical diagnosis
Medicine
Neuroradiology reading room
Radiology
Confidence interval
Differential diagnosis
Medical physics
Interventional radiology
Session (web analytics)
Diagnostic accuracy
Workflow
Computer science
Neurology
Pathology
Database
Internal medicine
Psychiatry
World Wide Web
Authors
Robert Siepmann,Marc Sebastian Huppertz,Annika Rastkhiz,Matthias Reen,Eric Corban,Christian Schmidt,Stephan Wilke,Philipp Schad,Can Yüksel,Christiane Kühl,Daniel Truhn,Sven Nebelung
Identifier
DOI: 10.1007/s00330-024-10727-2
Abstract
Objectives: Large language models (LLMs) have shown potential in radiology, but their ability to aid radiologists in interpreting imaging studies remains unexplored. We investigated the effects of a state-of-the-art LLM (GPT-4) on radiologists' diagnostic workflow.

Materials and methods: In this retrospective study, six radiologists of different experience levels read 40 selected imaging studies (radiographic, n = 10; CT, n = 10; MRI, n = 10; angiographic, n = 10), first unassisted (session one) and then assisted by GPT-4 (session two). Each imaging study was presented with demographic data, the chief complaint, and associated symptoms, and diagnoses were registered using an online survey tool. The impact of artificial intelligence (AI) assistance on diagnostic accuracy, confidence, user experience, input prompts, and generated responses was assessed, and instances of false information were recorded. Linear mixed-effects models were used to quantify the factors influencing diagnostic accuracy and confidence (fixed effects: experience, modality, AI assistance; random effect: radiologist).

Results: When assessing whether the correct diagnosis was among the top-3 differential diagnoses, diagnostic accuracy improved slightly from 181/240 (75.4%, unassisted) to 188/240 (78.3%, AI-assisted). Similar improvements were found when only the top differential diagnosis was considered. AI assistance was used in 77.5% of the readings. A total of 309 prompts were generated, primarily involving differential diagnoses (59.1%) and imaging features of specific conditions (27.5%). Diagnostic confidence was significantly higher when readings were AI-assisted (p < 0.001). Twenty-three responses (7.4%) were classified as hallucinations, while two (0.6%) were misinterpretations.

Conclusion: Integrating GPT-4 into the diagnostic process improved diagnostic accuracy slightly and diagnostic confidence significantly. Potentially harmful hallucinations and misinterpretations call for caution and highlight the need for further safeguarding measures.

Clinical relevance statement: Using GPT-4 as a virtual assistant when reading images made six radiologists of different experience levels feel more confident and provide more accurate diagnoses; yet, GPT-4 gave factually incorrect and potentially harmful information in 7.4% of its responses.
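The abstract names linear mixed-effects models with experience, modality, and AI assistance as fixed effects and the radiologist as a random effect. Below is a minimal sketch of how such a model could be fit in Python with statsmodels; it is an illustration only, not the authors' code, and the file and column names (readings.csv, confidence, experience, modality, ai_assisted, radiologist) are hypothetical placeholders for a long-format table with one row per reading.

```python
# Minimal sketch (not the authors' code) of the linear mixed-effects
# analysis described in the abstract, using pandas and statsmodels.
# All file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# One row per reading: outcome plus the predictors named in the abstract.
df = pd.read_csv("readings.csv")  # hypothetical long-format table

# Fixed effects: experience, modality, AI assistance;
# random intercept per radiologist (the abstract's random effect).
model = smf.mixedlm(
    "confidence ~ experience + C(modality) + ai_assisted",
    data=df,
    groups=df["radiologist"],
)
result = model.fit()
print(result.summary())  # the ai_assisted coefficient tests the AI effect
```

The same formula with a binary correctness indicator as the outcome would correspond to the diagnostic-accuracy analysis; the radiologist-level random intercept accounts for repeated readings by the same reader.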