Likert scale
Prostate cancer
Healthcare
Computer science
Literacy
Proportion (ratio)
Psychology
Data science
Medicine
Cancer
Political science
Education
Internal medicine
Developmental psychology
Physics
Quantum mechanics
Law
Authors
Marius Geantă, Daniel Bădescu, Narcis Chirca, Ovidiu Cătălin Nechita, Cosmin George Radu, Ștefan Rascu, D. Radavoi, Cristian Sima, Cristian Toma, Viorel Jinga
Identifier
DOI: 10.3390/bioengineering11070654
Abstract
This study assesses the effectiveness of chatbots powered by Large Language Models (LLMs)—ChatGPT 3.5, CoPilot, and Gemini—in delivering prostate cancer information, compared to the official Patient’s Guide. Using 25 expert-validated questions, we conducted a comparative analysis to evaluate accuracy, timeliness, completeness, and understandability through a Likert scale. Statistical analyses were used to quantify the performance of each model. Results indicate that ChatGPT 3.5 consistently outperformed the other models, establishing itself as a robust and reliable source of information. CoPilot also performed effectively, albeit slightly less so than ChatGPT 3.5. Despite the strengths of the Patient’s Guide, the advanced capabilities of LLMs like ChatGPT significantly enhance educational tools in healthcare. The findings underscore the need for ongoing innovation and improvement in AI applications within health sectors, especially considering the ethical implications underscored by the forthcoming EU AI Act. Future research should focus on investigating potential biases in AI-generated responses and their impact on patient outcomes.
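The comparative analysis described above (rating each source's answers on a Likert scale, then summarizing per model) can be sketched as follows. The ratings below are hypothetical placeholders, not the study's actual data; the study used 25 expert-validated questions and formal statistical tests, which this minimal sketch does not reproduce.

```python
from statistics import mean, stdev

# Hypothetical 1-5 Likert ratings per information source (illustrative only;
# the real study rated answers to 25 expert-validated questions).
ratings = {
    "ChatGPT 3.5": [5, 4, 5, 4, 5],
    "CoPilot": [4, 4, 5, 3, 4],
    "Gemini": [3, 4, 3, 4, 3],
    "Patient's Guide": [4, 3, 4, 4, 3],
}

# Summarize each source by mean Likert score and spread, then rank by mean.
summary = {name: (mean(r), stdev(r)) for name, r in ratings.items()}
ranked = sorted(summary.items(), key=lambda kv: kv[1][0], reverse=True)
for name, (m, s) in ranked:
    print(f"{name}: mean={m:.2f}, sd={s:.2f}")
```

In practice each of the four criteria (accuracy, timeliness, completeness, understandability) would be rated and summarized separately, and a nonparametric test would typically be applied before claiming one source outperforms another.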