Background
In recent years, significant breakthroughs have been made in the field of natural language processing, particularly with the development of large language models (LLMs). LLMs have demonstrated remarkable capabilities on benchmarks related to general medical question answering, but data on their performance in subspecialty fields are limited, and fewer studies still have directly compared the many available LLMs. These models have the potential to be used as part of adaptive physician training, medical copilot applications, and digital patient interaction scenarios. The ability of LLMs to participate in medical training and patient care depends in part on their mastery of the knowledge content of specific medical fields.

Methods
This study investigated the medical knowledge capability of multiple LLMs in the context of their internal medicine subspecialty multiple-choice test-taking ability. We compared the performance of several open-source LLMs (Llama2-70B, Koala 7B, Falcon 7B, Stable-Vicuna 13B, and Orca-Mini 13B) with the proprietary models GPT-4 and Claude 2 on multiple-choice questions in the field of nephrology. Nephrology was chosen as an example of a conceptually complex subspecialty field in internal medicine. The study evaluated the ability of LLMs to provide correct answers to Nephrology Self-Assessment Program (nephSAP) multiple-choice questions. These questions, administered by the American Society of Nephrology, help clinicians assess their knowledge of various topics in nephrology.

Results
The open-source LLMs answered 17.1% to 30.6% of the 858 nephSAP multiple-choice questions correctly. In contrast, Claude 2 answered 54.4% of the questions correctly, whereas GPT-4 achieved a score of 73.3%. A dataset containing the questions and ground-truth labels used to assess the LLMs has been made available.

Conclusions
We show that the current widely used open-source LLMs have poor zero-shot reasoning ability in nephrology compared with GPT-4 and Claude 2, illustrating knowledge gaps across LLMs relevant to future subspecialty medical training and patient care. (Funded by the Factor Family Foundation and others.)
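The evaluation described above amounts to comparing each model's single-letter answers against the ground-truth labels in the released question dataset and reporting the percentage correct. The following is a minimal sketch of that accuracy calculation; the function name, field layout, and example records are hypothetical illustrations, not the authors' published code or dataset schema.

```python
# Minimal sketch of zero-shot multiple-choice scoring.
# Assumes each question has an identifier, a ground-truth answer letter,
# and a model-predicted letter; all names and records below are hypothetical.

def score_model(predictions: dict[str, str], answer_key: dict[str, str]) -> float:
    """Return the fraction of questions the model answered correctly.

    predictions: question_id -> model's chosen letter (e.g., "B")
    answer_key:  question_id -> ground-truth letter
    """
    graded = [
        predictions.get(qid, "").strip().upper() == letter.upper()
        for qid, letter in answer_key.items()
    ]
    return sum(graded) / len(graded)


if __name__ == "__main__":
    answer_key = {"q1": "A", "q2": "C", "q3": "B"}      # hypothetical labels
    model_outputs = {"q1": "A", "q2": "D", "q3": "B"}    # hypothetical model answers
    print(f"Accuracy: {score_model(model_outputs, answer_key):.1%}")
```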