Avulsion
Examination (biology)
Tooth avulsion
Medicine
Mathematics education
Dentistry
Psychology
Surgery
Paleontology
Biology
Authors
Taibe Tokgöz Kaplan, Muhammet Cankar
Abstract
Background: In this study, the accuracy and comprehensiveness of the answers given to questions about dental avulsion by two artificial intelligence‐based language models, ChatGPT and Gemini, were comparatively evaluated. Materials and Methods: Based on the guidelines of the International Association of Dental Traumatology (IADT), a total of 33 questions were prepared, comprising multiple‐choice, true/false, and open‐ended questions, grouped into technical questions and patient questions about dental avulsion. The questions were posed to ChatGPT and Gemini. Responses were recorded and scored by four pediatric dentists. Statistical analyses, including intraclass correlation coefficient (ICC) analysis, were performed to determine the agreement and accuracy of the responses. The significance level was set at p < 0.050. Results: The mean score of the Gemini model was statistically significantly higher than that of ChatGPT (p = 0.001). ChatGPT gave more correct answers to open‐ended and true/false questions on dental avulsion and showed the lowest accuracy in the multiple‐choice section. For the Gemini model, there was no significant difference in median scores across question types on dental avulsion (p = 0.088). When ChatGPT and Gemini were compared with the Mann–Whitney U test without distinguishing between question types, Gemini's answers were found to be statistically significantly more accurate (p = 0.004). Conclusions: Evaluated against the IADT guideline for dental avulsion, the Gemini and ChatGPT language models show promise. To guarantee the successful incorporation of large language models into practice, additional research, clinical validation, and improvements to the models are needed.