Authors
Burhan Coskun, Gökhan Ocakoğlu, Melih Yetemen, Onur Kaygısız
Abstract
OBJECTIVE: To evaluate the performance of ChatGPT, an artificial intelligence (AI) language model, in providing patient information on prostate cancer, and to compare the accuracy, similarity, and quality of its information against a reference source.

METHODS: Patient information material on prostate cancer from the European Association of Urology Patient Information website was used as the reference source, from which 59 queries were generated. The accuracy of the model's content was determined with F1, precision, and recall scores; similarity was assessed with cosine similarity; and quality was evaluated using a 5-point Likert scale, the General Quality Score (GQS).

RESULTS: ChatGPT was able to respond to all prostate cancer-related queries. The average F1 score was 0.426 (range: 0-1), the average precision score was 0.349 (range: 0-1), the average recall score was 0.549 (range: 0-1), and the average cosine similarity was 0.609 (range: 0-1). The average GQS was 3.62 ± 0.49 (range: 1-5), with no answers achieving the maximum GQS of 5. While ChatGPT produced a larger amount of information than the reference, the accuracy and quality of the content were not optimal, with all scores indicating a need for improvement in the model's performance.

CONCLUSION: Caution should be exercised when using ChatGPT as a patient information source for prostate cancer due to limitations in its performance, which may lead to inaccuracies and potential misunderstandings. Further studies, using different topics and language models, are needed to fully understand the capabilities and limitations of AI-generated patient information.
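The abstract does not specify how the F1, precision, recall, and cosine similarity scores were computed. A minimal sketch of one common approach, assuming token-level overlap for precision/recall/F1 and TF-IDF vectors for cosine similarity, is shown below; the function names, tokenization scheme, and example texts are illustrative assumptions, not the authors' actual pipeline.

```python
# Hedged sketch: token-overlap precision/recall/F1 and TF-IDF cosine similarity
# between a model answer and a reference passage. Tokenization and vectorization
# choices here are assumptions, not the method reported in the paper.
import re
from collections import Counter

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def tokenize(text: str) -> list[str]:
    """Lowercase and split into word tokens (assumed preprocessing)."""
    return re.findall(r"\w+", text.lower())


def overlap_scores(candidate: str, reference: str) -> dict[str, float]:
    """Token-level precision, recall, and F1 of a candidate answer vs. the reference."""
    cand, ref = Counter(tokenize(candidate)), Counter(tokenize(reference))
    overlap = sum((cand & ref).values())  # tokens shared by both texts (with multiplicity)
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}


def tfidf_cosine(candidate: str, reference: str) -> float:
    """Cosine similarity between TF-IDF vectors of the two texts."""
    vectors = TfidfVectorizer().fit_transform([candidate, reference])
    return float(cosine_similarity(vectors[0], vectors[1])[0, 0])


if __name__ == "__main__":
    answer = "Prostate cancer is often slow growing and may not need immediate treatment."
    reference = "Many prostate cancers grow slowly and may not require immediate treatment."
    print(overlap_scores(answer, reference))
    print(tfidf_cosine(answer, reference))
```

Under this kind of scheme, precision penalizes extra content in the model's answer and recall penalizes missing reference content, which is consistent with the reported pattern of a higher recall (0.549) than precision (0.349) for answers longer than the reference.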