VetLLM: Large Language Model for Predicting Diagnosis from Veterinary Notes

Computer Science · Natural Language Processing · Artificial Intelligence
Authors
Yixing Jiang, Jeremy A. Irvin, Andrew Y. Ng, James Zou
Identifier
DOI: 10.1142/9789811286421_0010
Abstract

Biocomputing 2024, pp. 120-133 (2023). Open Access.
Yixing Jiang, Jeremy A. Irvin, Andrew Y. Ng, and James Zou (Stanford University, Stanford, CA, United States)
https://doi.org/10.1142/9789811286421_0010

Abstract: Lack of diagnosis coding is a barrier to leveraging veterinary notes for medical and public health research. Previous work is limited to developing specialized rule-based or customized supervised learning models to predict diagnosis codes, which is tedious and not easily transferable. In this work, we show that open-source large language models (LLMs) pretrained on general corpora can achieve reasonable performance in a zero-shot setting. Alpaca-7B achieves a zero-shot F1 of 0.538 on CSU test data and 0.389 on PP test data, two standard benchmarks for coding from veterinary notes. Furthermore, with appropriate fine-tuning, the performance of LLMs can be substantially boosted, exceeding that of strong state-of-the-art supervised models. VetLLM, which is fine-tuned from Alpaca-7B using just 5,000 veterinary notes, achieves an F1 of 0.747 on CSU test data and 0.637 on PP test data. Notably, our fine-tuning is data-efficient: using 200 notes can outperform supervised models trained with more than 100,000 notes. These findings demonstrate the great potential of leveraging LLMs for language processing tasks in medicine, and we advocate this new paradigm for processing clinical text.
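The F1 scores quoted above evaluate predicted diagnosis codes against gold-standard codes per note. As a minimal sketch of how such a multi-label F1 can be computed, assuming micro-averaging over all (note, code) decisions (the abstract does not specify the averaging scheme, and the code lists below are hypothetical):

```python
def micro_f1(gold, pred):
    """Micro-averaged F1 over per-note sets of diagnosis codes.

    gold, pred: lists of code lists, one entry per clinical note.
    Counts true positives, false positives, and false negatives
    pooled across all notes, then computes F1 from the totals.
    """
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        g, p = set(g), set(p)
        tp += len(g & p)  # codes predicted and correct
        fp += len(p - g)  # codes predicted but not in gold
        fn += len(g - p)  # gold codes the model missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# Toy example with two notes and hypothetical code identifiers
gold = [["C001", "C002"], ["C003"]]
pred = [["C001"], ["C003", "C004"]]
print(round(micro_f1(gold, pred), 3))  # → 0.667
```

With one missed code and one spurious code across two notes, pooled precision and recall are both 2/3, giving F1 ≈ 0.667. Micro-averaging weights every code decision equally; a macro-averaged variant (per-code F1, then averaged) would weight rare diagnoses more heavily.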
Keywords: Diagnosis Extraction; Veterinary Notes; Veterinary Medicine; Large Language Models; LLM; Foundation Models

© The Authors. Open Access chapter published by World Scientific Publishing Company and distributed under the terms of the Creative Commons Attribution Non-Commercial (CC BY-NC) 4.0 License.
