Keywords
Chatbot
Context (archaeology)
Question answering
Medicine
Computer science
Information retrieval
World Wide Web
History
Archaeology
Authors
David Steybe,Philipp Poxleitner,Suad Aljohani,Bente Brokstad Herlofson,Ourania Nicolatou‐Galitis,Vinod F. Patel,Stefano Fedele,Tae‐Geon Kwon,Vittorio Fusco,Sarina E.C. Pichardo,Katharina Theresa Obermeier,Sven Otto,Alexander Rau,Maximilian Frederik Russe
Identifiers
DOI:10.1016/j.jcms.2024.12.009
Abstract
The potential of large language models (LLMs) in medical applications is significant, and retrieval-augmented generation (RAG) can address the weaknesses of these models in terms of data transparency and scientific accuracy by incorporating current scientific knowledge into their responses. In this study, RAG and OpenAI's GPT-4 were used to develop GuideGPT, a context-aware chatbot integrated with a knowledge database built from 449 scientific publications and designed to answer questions on the prevention, diagnosis, and treatment of medication-related osteonecrosis of the jaw (MRONJ). Its responses were compared with those of a generic LLM ("PureGPT") across 30 MRONJ-related questions. Ten international experts in MRONJ evaluated the responses for content, language, scientific explanation, and agreement on 5-point Likert scales. Statistical analysis using the Mann-Whitney U test showed significantly better ratings for GuideGPT than for PureGPT regarding content (p = 0.006), scientific explanation (p = 0.032), and agreement (p = 0.008), but not language (p = 0.407). The study thus demonstrates that RAG is a promising tool for improving the response quality and reliability of LLMs by incorporating domain-specific knowledge. The approach addresses the limitations of generic chatbots and can provide the traceable and up-to-date responses essential for clinical practice.
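The abstract describes a retrieve-then-prompt (RAG) architecture: relevant passages are pulled from a curated knowledge base and prepended to the question before the LLM generates an answer. The sketch below illustrates that generic pattern only; it is not the authors' GuideGPT implementation, and the toy corpus, lexical scoring function, and prompt template are hypothetical stand-ins (real systems would typically use embedding-based retrieval over the 449 publications and send the assembled prompt to GPT-4).

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve context, then
# build a context-aware prompt for an LLM. Corpus and scoring are illustrative.
from collections import Counter

# Hypothetical stand-in for a publication-derived knowledge database.
CORPUS = [
    "Antiresorptive agents such as bisphosphonates are associated with MRONJ.",
    "Conservative management of MRONJ includes antimicrobial mouth rinses.",
    "Dental screening before antiresorptive therapy reduces MRONJ risk.",
]

def score(query: str, text: str) -> int:
    """Crude lexical-overlap score; production systems use embedding similarity."""
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    return sum((q & t).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most relevant to the query."""
    return sorted(CORPUS, key=lambda text: score(query, text), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble the grounded prompt that would be sent to the LLM (e.g. GPT-4)."""
    context = "\n".join(f"- {p}" for p in retrieve(query))
    return (
        "Answer using only the sources below and cite them.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    print(build_prompt("How can MRONJ be prevented before antiresorptive therapy?"))
```

The design point is that the model's answer is constrained to, and traceable back to, the retrieved sources, which is what gives RAG its advantage in data transparency over a generic chatbot.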
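The expert ratings were compared with the Mann-Whitney U test, a non-parametric test suited to ordinal Likert data. The snippet below shows how such a comparison can be run with SciPy; the two rating vectors are made-up example data, not the experts' actual scores from the study.

```python
# Comparing two groups of 5-point Likert ratings with the Mann-Whitney U test.
from scipy.stats import mannwhitneyu

guidegpt_content = [5, 4, 5, 4, 4, 5, 3, 5, 4, 4]  # hypothetical expert ratings
puregpt_content = [3, 3, 4, 2, 3, 4, 3, 3, 2, 3]   # hypothetical expert ratings

stat, p = mannwhitneyu(guidegpt_content, puregpt_content, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")
```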