VetLLM: Large Language Model for Predicting Diagnosis from Veterinary Notes

Computer Science · Natural Language Processing · Artificial Intelligence
Authors
Yixing Jiang, Jeremy A. Irvin, Andrew Y. Ng, James Zou
Identifier
DOI: 10.1142/9789811286421_0010
Abstract

Biocomputing 2024, pp. 120-133 (2023). Open Access.
Yixing Jiang, Jeremy A. Irvin, Andrew Y. Ng, and James Zou (Stanford University, Stanford, CA, United States)
https://doi.org/10.1142/9789811286421_0010

Abstract: Lack of diagnosis coding is a barrier to leveraging veterinary notes for medical and public health research. Previous work has been limited to developing specialized rule-based or customized supervised learning models to predict diagnosis codes, which is tedious and not easily transferable. In this work, we show that open-source large language models (LLMs) pretrained on general corpora can achieve reasonable performance in a zero-shot setting. Alpaca-7B achieves a zero-shot F1 of 0.538 on CSU test data and 0.389 on PP test data, two standard benchmarks for coding from veterinary notes. Furthermore, with appropriate fine-tuning, the performance of LLMs can be substantially boosted, exceeding that of strong state-of-the-art supervised models. VetLLM, which is fine-tuned from Alpaca-7B using just 5,000 veterinary notes, achieves an F1 of 0.747 on CSU test data and 0.637 on PP test data. Notably, our fine-tuning is data-efficient: using 200 notes can outperform supervised models trained with more than 100,000 notes. These findings demonstrate the great potential of leveraging LLMs for language processing tasks in medicine, and we advocate this new paradigm for processing clinical text.

Keywords: Diagnosis Extraction, Veterinary Notes, Veterinary Medicine, Large Language Models, LLM, Foundation Models

© The Authors. Open Access chapter published by World Scientific Publishing Company and distributed under the terms of the Creative Commons Attribution Non-Commercial (CC BY-NC) 4.0 License.
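To make the zero-shot setting concrete, below is a minimal sketch of how an Alpaca-7B checkpoint could be prompted to extract diagnoses from a veterinary note. The checkpoint name, the instruction wording, the example note, and the decoding settings are all assumptions for illustration; the paper's exact prompt template, label vocabulary, and evaluation protocol are not reproduced here.

```python
# Hedged sketch of zero-shot diagnosis prediction with an Alpaca-style model.
# Assumptions: a HuggingFace-hosted Alpaca-7B checkpoint and a generic
# instruction-following prompt format; not the authors' exact setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "chavinlo/alpaca-native"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto")

# Hypothetical clinical note; real notes are longer and unstructured.
note = "Canine presented with polyuria and polydipsia; glucosuria on urinalysis."

prompt = (
    "Below is an instruction that describes a task, paired with an input.\n\n"
    "### Instruction:\nList the diagnoses supported by this veterinary note.\n\n"
    f"### Input:\n{note}\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Decode only the newly generated tokens, i.e. the model's predicted diagnoses.
prediction = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(prediction)
```

The same prompt format could then be used for instruction fine-tuning on a small set of annotated notes (the paper reports strong results with as few as 200), which is what distinguishes VetLLM from the purely zero-shot baseline.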