Large language models in healthcare and medical domain: A review

Topics: Healthcare, Computer Science, Political Science, Law

Authors: Zabir Al Nazi, Wei Peng
Journal: Cornell University - arXiv
DOI: 10.48550/arxiv.2401.06775

Abstract

The deployment of large language models (LLMs) within the healthcare sector has sparked both enthusiasm and apprehension. These models exhibit the remarkable capability to provide proficient responses to free-text queries, demonstrating a nuanced understanding of professional medical knowledge. This comprehensive survey delves into the functionalities of existing LLMs designed for healthcare applications, elucidating the trajectory of their development, starting from traditional Pretrained Language Models (PLMs) to the present state of LLMs in the healthcare sector. First, we explore the potential of LLMs to amplify the efficiency and effectiveness of diverse healthcare applications, particularly focusing on clinical language understanding tasks. These tasks encompass a wide spectrum, ranging from named entity recognition and relation extraction to natural language inference, multi-modal medical applications, document classification, and question-answering. Additionally, we conduct an extensive comparison of the most recent state-of-the-art LLMs in the healthcare domain, while also assessing the utilization of various open-source LLMs and highlighting their significance in healthcare applications. Furthermore, we present the essential performance metrics employed to evaluate LLMs in the biomedical domain, shedding light on their effectiveness and limitations. Finally, we summarize the prominent challenges and constraints faced by large language models in the healthcare sector, offering a holistic perspective on their potential benefits and shortcomings. This review provides a comprehensive exploration of the current landscape of LLMs in healthcare, addressing their role in transforming medical applications and the areas that warrant further research and development.
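To make the evaluation discussion above more concrete, the following Python sketch computes two metrics commonly reported for extractive biomedical question-answering: exact match and token-level F1. This is an illustrative example only; the choice of metrics, the normalization rules, and the sample prediction/reference strings are assumptions for demonstration and are not taken from the reviewed paper.

# Minimal sketch: exact-match and token-level F1 for extractive QA answers.
# Illustrative only -- normalization rules and sample strings are assumptions
# for demonstration, not taken from the reviewed paper.
import re
import string
from collections import Counter


def normalize(text: str) -> str:
    """Lowercase, strip punctuation and English articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction: str, reference: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(reference))


def token_f1(prediction: str, reference: str) -> float:
    """Harmonic mean of token-overlap precision and recall."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


if __name__ == "__main__":
    # Hypothetical model output vs. reference answer for a clinical question.
    prediction = "metformin is the first-line therapy"
    reference = "Metformin is the recommended first-line therapy."
    print(f"EM: {exact_match(prediction, reference):.2f}")
    print(f"F1: {token_f1(prediction, reference):.2f}")

In practice, benchmark results are obtained by averaging such per-example scores over an entire test set; classification-style tasks in the survey (e.g., named entity recognition, document classification) are typically reported with precision, recall, and F1 as well.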