Authors
Daiju Ueda,Yasuhito Mitsuyama,Hirotaka Takita,Daisuke Horiuchi,Shannon L Walston,Hiroyuki Tatekawa,Yukio Miki
Abstract
This study evaluates the diagnostic performance of GPT-4-based ChatGPT in radiology using Radiology's "Diagnosis Please" quizzes. With an overall accuracy of 54% (170/313), ChatGPT shows potential as a valuable diagnostic tool in radiology.

Article Information

Title: ChatGPT's Diagnostic Performance from Patient History and Imaging Findings on the Diagnosis Please Quizzes
Journal: Radiology, Vol. 308, No. 1 (Original Research: Computer Applications)
Authors: Daiju Ueda (1), Yasuhito Mitsuyama (2), Hirotaka Takita (2), Daisuke Horiuchi (2), Shannon L Walston (2), Hiroyuki Tatekawa (2), Yukio Miki (2)

Author Affiliations:
1. Center for Health Science Innovation, Osaka Metropolitan University, Osaka, Japan
2. Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan

Corresponding Author: Daiju Ueda, MD, Center for Health Science Innovation, Osaka Metropolitan University, 1-4-3, Asahi-machi, Abeno-ku, Osaka 545-8585, Japan. Email: [email protected]

Published Online: July 18, 2023. DOI: https://doi.org/10.1148/radiol.231040

Accompanying This Article: "ChatGPT in Radiology," published August 1, 2023.

Abbreviations: ChatGPT = Chat Generative Pre-trained Transformer; STARD = Standards for Reporting Diagnostic Accuracy Studies
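As a quick arithmetic check, the reported 54% overall accuracy corresponds to 170 correctly answered cases out of 313 quizzes. A minimal sketch (the counts are taken from the abstract; the rounding convention is an assumption):

```python
# Verify the reported overall accuracy: 170 correct diagnoses out of 313 quiz cases.
correct = 170
total = 313

accuracy = correct / total          # exact proportion, about 0.5431
percent = round(accuracy * 100)     # rounded to the nearest whole percent

print(f"Accuracy: {accuracy:.1%} (reported as {percent}%)")
```

This confirms that 170/313 rounds to the 54% figure stated in the abstract.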