Authors
Leonardo C. Almeida, Eduardo Moreno Júdice de Mattos Farina, Paulo E. A. Kuriki, Nitamar Abdala, Felipe Kitamura
Source
Journal: Radiology: Artificial Intelligence
Publisher: Radiological Society of North America
Date: 2024-01-01
Volume/Issue: 6 (1)
Citations: 11
Abstract
This prospective exploratory study, conducted from January 2023 through May 2023, evaluated the ability of ChatGPT to answer questions from Brazilian radiology board examinations, exploring how different prompt strategies influence the performance of GPT-3.5 and GPT-4. Three multiple-choice board examinations that did not include image-based questions were evaluated: (a) radiology and diagnostic imaging, (b) mammography, and (c) neuroradiology. Five styles of zero-shot prompting were tested: (a) raw question, (b) brief instruction, (c) long instruction, (d) chain-of-thought, and (e) question-specific automatic prompt generation (QAPG). The QAPG and brief instruction prompt strategies performed best on all examinations (P < .05), obtaining passing scores (≥60%) on the radiology and diagnostic imaging examination with both versions of ChatGPT. The QAPG style achieved a score of 60% on the mammography examination using GPT-3.5 and 76% using GPT-4. GPT-4 achieved a score of up to 65% on the neuroradiology examination. The long instruction style consistently underperformed, implying that excessive detail might harm performance. GPT-4's scores were less sensitive to changes in prompt style. With the QAPG style, the model selected option "A" disproportionately often, though the difference was not statistically significant, suggesting a possible answer-option bias. GPT-4 passed all three radiology board examinations, and GPT-3.5 passed two of the three when an optimal prompt style was used. Keywords: ChatGPT, Artificial Intelligence, Board Examinations, Radiology and Diagnostic Imaging, Mammography, Neuroradiology. © RSNA, 2023. See also the commentary by Trivedi and Gichoya in this issue.
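The abstract names the five zero-shot prompt styles but does not give their wording. The Python sketch below is a minimal illustration, not the authors' code, of how such a comparison could be wired up against the OpenAI chat API; the instruction texts, the `ask` and `build_prompts` helpers, and the two-call QAPG pattern are all assumptions made for illustration.

```python
# A minimal sketch (not the study's actual code) of the five zero-shot prompt
# styles described in the abstract, using the OpenAI Python client (v1+).
# All instruction wording below is an assumption; the paper's exact prompts
# are not given in the abstract.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(model: str, prompt: str) -> str:
    """Send one zero-shot prompt and return the model's reply text."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def build_prompts(question: str, model: str) -> dict[str, str]:
    """Assemble the five prompt styles for one multiple-choice exam item."""
    prompts = {
        # (a) raw question: the exam item with no added instruction
        "raw": question,
        # (b) brief instruction: one short directive
        "brief": (
            "Answer this radiology board question with the letter of the "
            f"correct option.\n\n{question}"
        ),
        # (c) long instruction: deliberately verbose framing
        "long": (
            "You are an expert radiologist taking a Brazilian radiology board "
            "examination. Read the question carefully, consider every option, "
            "weigh the clinical and physical principles involved, and respond "
            f"with the single letter of the correct option.\n\n{question}"
        ),
        # (d) chain-of-thought: ask for step-by-step reasoning before answering
        "cot": (
            f"{question}\n\nLet's think step by step, then state the final "
            "answer as a single letter."
        ),
    }
    # (e) QAPG: first have the model write a prompt tailored to this question,
    # then answer the question under that generated prompt (two calls total).
    generated = ask(
        model,
        "Write an instruction that would help an AI answer this exam "
        f"question correctly:\n\n{question}",
    )
    prompts["qapg"] = f"{generated}\n\n{question}"
    return prompts
```

Note that QAPG as sketched costs two API calls per question, since the model first generates a question-specific instruction and then answers under it, which may partly explain why it can outperform fixed instructions.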