Large language models leverage external knowledge to extend clinical insight beyond language boundaries

Authors
Jiageng Wu, Xian Wu, Zhaopeng Qiu, Minghui Li, Shixu Lin, Yingying Zhang, Yefeng Zheng, Changzheng Yuan, Jie Yang
Source
Journal: Journal of the American Medical Informatics Association [Oxford University Press]
Citations: 3
Identifier
DOI: 10.1093/jamia/ocae079
Abstract

Objectives: Large Language Models (LLMs) such as ChatGPT and Med-PaLM have excelled in various medical question-answering tasks. However, these English-centric models encounter challenges in non-English clinical settings, primarily due to limited clinical knowledge in the respective languages, a consequence of imbalanced training corpora. We systematically evaluate LLMs in the Chinese medical context and develop a novel in-context learning framework to enhance their performance.

Materials and Methods: The latest China National Medical Licensing Examination (CNMLE-2022) served as the benchmark. We collected 53 medical books and 381,149 medical questions to construct the medical knowledge base and question bank. The proposed Knowledge and Few-shot Enhancement In-context Learning (KFE) framework leverages the in-context learning ability of LLMs to integrate diverse external clinical knowledge sources. We evaluated KFE with ChatGPT (GPT-3.5), GPT-4, Baichuan2-7B, Baichuan2-13B, and QWEN-72B on CNMLE-2022, and further investigated the effectiveness of different pathways for incorporating medical knowledge into LLMs from 7 distinct perspectives.

Results: Directly applied, ChatGPT failed to qualify for the CNMLE-2022, with a score of 51. Coupled with the KFE framework, LLMs of varying sizes yielded consistent and significant improvements: ChatGPT's performance surged to 70.04, and GPT-4 achieved the highest score of 82.59. This surpasses the qualification threshold (60) and exceeds the average human score of 68.70, affirming the effectiveness and robustness of the framework. It also enabled the smaller Baichuan2-13B to pass the examination, showcasing its great potential in low-resource settings.

Discussion and Conclusion: This study sheds light on optimal practices for enhancing the capabilities of LLMs in non-English medical scenarios. By synergizing medical knowledge through in-context learning, LLMs can extend clinical insight beyond language barriers in healthcare, significantly reducing language-related disparities in LLM applications and ensuring global benefit in this field.
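The abstract describes KFE as assembling two kinds of external context for each exam question: retrieved snippets from a medical knowledge base and few-shot exemplars from a question bank. A minimal sketch of that prompt-assembly idea is below. Everything here is an assumption for illustration: the function names, the toy keyword-overlap retriever, and the prompt wording are hypothetical; the paper's actual retrieval method and prompt templates are not specified in the abstract.

```python
# Hypothetical sketch of knowledge- and few-shot-enhanced prompting (KFE idea).
# The retriever is a toy word-overlap scorer, NOT the paper's method.

def overlap_score(query: str, doc: str) -> int:
    """Toy relevance score: number of words shared by query and document."""
    return len(set(query.split()) & set(doc.split()))

def retrieve(query: str, corpus: list[str], k: int) -> list[str]:
    """Return the top-k corpus entries by word overlap with the query."""
    return sorted(corpus, key=lambda d: overlap_score(query, d), reverse=True)[:k]

def build_kfe_prompt(question: str,
                     knowledge_base: list[str],
                     question_bank: list[tuple[str, str]],
                     k_know: int = 2,
                     k_shot: int = 2) -> str:
    """Assemble one prompt: retrieved knowledge, worked examples, then the question."""
    knowledge = retrieve(question, knowledge_base, k_know)
    shot_questions = retrieve(question, [q for q, _ in question_bank], k_shot)
    shots = [(q, a) for q, a in question_bank if q in shot_questions]

    parts = ["Relevant medical knowledge:"]
    parts += [f"- {snippet}" for snippet in knowledge]
    parts.append("Worked examples:")
    parts += [f"Q: {q}\nA: {a}" for q, a in shots]
    parts.append(f"Q: {question}\nA:")
    return "\n".join(parts)
```

The assembled string would then be sent to the LLM (e.g., GPT-3.5 or Baichuan2) as a single in-context prompt, so no model fine-tuning is required, which is what makes the approach attractive for smaller models in low-resource settings.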
