Authors
Meziane Silhadi,Wissam B. Nassrallah,David Mikhail,Daniel Milad,Mona Harissi‐Dagher
Identifier
DOI:10.1016/j.jcjo.2025.01.001
Abstract
To evaluate the performance of large language models (LLMs), specifically Microsoft Copilot, GPT-4 (GPT-4o and GPT-4o mini), and Google Gemini (Gemini and Gemini Advanced), in answering ophthalmological questions, and to assess the impact of prompting techniques on their accuracy.

Prospective qualitative study.

Microsoft Copilot, GPT-4 (GPT-4o and GPT-4o mini), and Google Gemini (Gemini and Gemini Advanced).

A total of 300 ophthalmological questions from StatPearls were tested, covering a range of subspecialties and image-based tasks. Each question was evaluated using two prompting techniques: zero-shot forced prompting (prompt 1) and combined role-based and zero-shot plan-and-solve+ prompting (prompt 2).

With zero-shot forced prompting, GPT-4o demonstrated significantly superior overall performance, correctly answering 72.3% of questions and outperforming all other models, including Copilot (53.7%), GPT-4o mini (62.0%), Gemini (54.3%), and Gemini Advanced (62.0%) (p < 0.0001). Both Copilot and GPT-4o showed notable improvements with prompt 2 over prompt 1, raising Copilot's accuracy from the lowest (53.7%) to the second highest (72.3%) among the evaluated LLMs.

While newer iterations of LLMs, such as GPT-4o and Gemini Advanced, outperformed their less advanced counterparts (GPT-4o mini and Gemini), this study emphasizes the need for caution in clinical applications of these models. The choice of prompting technique significantly influences performance, highlighting the need for further research to refine LLMs' capabilities, particularly in visual data interpretation, to ensure their safe integration into medical practice.
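The abstract names the two prompting techniques but not their exact wording. The sketch below illustrates what such templates might look like; the template text, the `build_prompt` helper, and the `technique` labels are hypothetical reconstructions based only on the technique names in the abstract, not the authors' actual prompts.

```python
# Hypothetical reconstruction of the two prompting styles compared in the
# study; the authors' exact prompt wording is not given in the abstract.

# "Prompt 1": zero-shot forced prompting — demand a single answer choice,
# with no intermediate reasoning requested.
ZERO_SHOT_FORCED = (
    "Answer the following multiple-choice ophthalmology question. "
    "Respond with the letter of the single best answer only.\n\n{question}"
)

# "Prompt 2": role-based prompting combined with zero-shot plan-and-solve+
# prompting — assign an expert role, then ask the model to plan, solve,
# and verify each step before committing to an answer.
ROLE_PLAN_SOLVE = (
    "You are an expert ophthalmologist.\n"
    "First understand the question and extract the relevant clinical "
    "variables, then devise a step-by-step plan, carry out the plan, "
    "and check each intermediate result.\n"
    "Finally, state the letter of the single best answer.\n\n{question}"
)

def build_prompt(question: str, technique: str) -> str:
    """Fill the chosen template (``"prompt 1"`` or ``"prompt 2"``)
    with a question."""
    templates = {"prompt 1": ZERO_SHOT_FORCED, "prompt 2": ROLE_PLAN_SOLVE}
    return templates[technique].format(question=question)
```

Under this reading, the jump in Copilot's accuracy with prompt 2 would come from the added role framing and explicit plan-then-solve scaffolding, not from any change to the question text itself.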