Modality (human-computer interaction)
Medicine
Workflow
Medical physics
Variety (cybernetics)
Radiology
Artificial intelligence
Computer science
Database
Authors
Sishir Doddi,Taryn Hibshman,Oscar Salichs,Kaustav Bera,Charit Tippareddy,Nikhil H. Ramaiya,Sree Harsha Tirumani
Identifier
DOI:10.1067/j.cpradiol.2023.10.022
Abstract
Artificial intelligence (AI) has recently become a trending tool and topic regarding productivity, especially with publicly available free services such as ChatGPT and Bard. In this report, we investigate whether two widely available chatbots, ChatGPT and Bard, provide consistently accurate responses about the best imaging modality for urologic clinical situations and whether their responses are in line with the American College of Radiology (ACR) Appropriateness Criteria (AC). All clinical scenarios provided by the ACR were input into ChatGPT and Bard, and the responses were compared against the ACR AC and recorded. Both chatbots had an appropriate imaging modality rate of 62%, and no significant difference in the proportion of correct imaging modalities was found overall between the two services (p>0.05). Our study found that ChatGPT and Bard are similar in their ability to suggest the most appropriate imaging modality in a variety of urologic scenarios based on the ACR AC. Nonetheless, both chatbots lack consistent accuracy, and further development is necessary before implementation in clinical settings. For proper use of these AI services in clinical decision making, further development is needed to improve physician workflow.
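As a minimal sketch of how the reported comparison between the two chatbots' accuracy rates could be carried out, the example below applies a chi-square test to a 2x2 table of correct versus incorrect recommendations. The scenario counts are hypothetical placeholders (the abstract does not state the total number of ACR scenarios evaluated), and the chi-square test is one common choice for comparing two proportions, not necessarily the exact test used by the authors.

# Hypothetical sketch: comparing the proportion of correct imaging-modality
# recommendations between two chatbots with a chi-square test on a 2x2 table.
# The counts below are placeholders, NOT the study's data.
from scipy.stats import chi2_contingency

n_scenarios = 100          # hypothetical number of ACR clinical scenarios
correct_chatgpt = 62       # hypothetical count giving a 62% appropriate rate
correct_bard = 62          # hypothetical count giving a 62% appropriate rate

table = [
    [correct_chatgpt, n_scenarios - correct_chatgpt],  # ChatGPT: correct / incorrect
    [correct_bard, n_scenarios - correct_bard],        # Bard: correct / incorrect
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}")
# A p-value above 0.05 would indicate no significant difference
# between the two proportions, consistent with the abstract's statement.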