Authors
Siru Liu,Aileen P. Wright,Allison B. McCoy,Sean S Huang,Julian Z Genkins,Josh F. Peterson,Yaa Kumah-Crystal,William Martínez,Babatunde Carew,Dara Mize,Bryan D. Steitz,Adam Wright
Identifier
DOI:10.1093/jamia/ocae142
Abstract
Objective: This study investigates the feasibility of using Large Language Models (LLMs) to engage with patients as they draft a question to their healthcare providers and to generate pertinent follow-up questions that the patient can answer before sending the message. The goal is to ensure that the healthcare provider receives all the information needed to answer the patient's question safely and accurately, eliminating back-and-forth messaging and the associated delays and frustrations.

Methods: We collected a dataset of patient messages sent between January 1, 2022 and March 7, 2023 at Vanderbilt University Medical Center. Two internal medicine physicians identified 7 common scenarios. We used 3 LLMs to generate follow-up questions: (1) Comprehensive LLM Artificial Intelligence Responder (CLAIR), a locally fine-tuned LLM; (2) GPT4 with a simple prompt; and (3) GPT4 with a complex prompt. Five physicians rated these alongside the actual follow-ups written by healthcare providers on clarity, completeness, conciseness, and utility.

Results: For five of the seven scenarios, our CLAIR model had the best performance. The GPT4 models received higher scores for utility and completeness but lower scores for clarity and conciseness. CLAIR generated follow-up questions with clarity and conciseness similar to the actual follow-ups written by healthcare providers, with higher utility than both healthcare providers and GPT4, and with completeness lower than GPT4 but better than healthcare providers.

Conclusion: LLMs can generate follow-up patient messages designed to clarify a medical question that compare favorably to those written by healthcare providers.