LEAP: LLM instruction-example adaptive prompting framework for biomedical relation extraction

Keywords: Computer science, Robustness (evolution), Task (project management), Relation (database), Relation extraction, Artificial intelligence, Machine learning, Data mining, Engineering, Systems engineering, Chemistry, Biochemistry, Gene
Authors
Huixue Zhou, Mingchen Li, Yongkang Xiao, Han Yang, Rui Zhang
Source
Journal: Journal of the American Medical Informatics Association [Oxford University Press]
Volume/Issue: 31 (9): 2010-2018; Citations: 5
Identifier
DOI:10.1093/jamia/ocae147
Abstract

Objective: To investigate the demonstration in large language models (LLMs) for biomedical relation extraction. This study introduces a framework comprising three types of adaptive tuning methods to assess their impacts and effectiveness.

Materials and Methods: Our study was conducted in two phases. Initially, we analyzed a range of demonstration components vital for LLMs' biomedical data capabilities, including task descriptions and examples, experimenting with various combinations. Subsequently, we introduced the LLM instruction-example adaptive prompting (LEAP) framework, including instruction adaptive tuning, example adaptive tuning, and instruction-example adaptive tuning methods. This framework aims to systematically investigate both adaptive task descriptions and adaptive examples within the demonstration. We assessed the performance of the LEAP framework on the DDI, ChemProt, and BioRED datasets, employing LLMs such as Llama2-7b, Llama2-13b, and MedLLaMA_13B.

Results: Our findings indicated that Instruction + Options + Example and its expanded form substantially improved F1 scores over the standard Instruction + Options mode for zero-shot LLMs. The LEAP framework, particularly through its example adaptive prompting, demonstrated superior performance over conventional instruction tuning across all models. Notably, the MedLLaMA_13B model achieved an exceptional F1 score of 95.13 on the ChemProt dataset using this method. Significant improvements were also observed in the DDI 2013 and BioRED datasets, confirming the method's robustness in sophisticated data extraction scenarios.

Conclusion: The LEAP framework offers a compelling strategy for enhancing LLM training strategies, steering away from extensive fine-tuning towards more dynamic and contextually enriched prompting methodologies, as showcased in biomedical relation extraction.
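The abstract contrasts a plain "Instruction + Options" prompt with an "Instruction + Options + Example" layout for zero-shot relation extraction. The following is a minimal sketch of how such a prompt could be composed for a ChemProt-style input; the instruction wording, relation labels, demonstration sentence, and the build_prompt helper are illustrative assumptions, not the paper's actual templates.

```python
# Sketch of the "Instruction + Options + Example" prompt layout compared in the
# abstract against plain "Instruction + Options". All strings below are
# illustrative placeholders, not the paper's exact prompts.

INSTRUCTION = (
    "Identify the relation between the two marked chemical/protein entities "
    "in the sentence."
)

# Candidate relation labels (a simplified subset in the style of ChemProt).
OPTIONS = ["CPR:3 (upregulator)", "CPR:4 (downregulator)", "CPR:9 (substrate)", "none"]

# One in-context demonstration; under example adaptive prompting this would be
# selected per input rather than fixed.
EXAMPLE = (
    "Sentence: @CHEMICAL$ inhibits the activity of @GENE$.\n"
    "Relation: CPR:4 (downregulator)"
)


def build_prompt(sentence: str, mode: str = "instruction+options+example") -> str:
    """Compose a zero-/few-shot prompt from the demonstration components."""
    parts = [INSTRUCTION, "Options: " + ", ".join(OPTIONS)]
    if "example" in mode:
        parts.append("Example:\n" + EXAMPLE)
    parts.append(f"Sentence: {sentence}\nRelation:")
    return "\n\n".join(parts)


if __name__ == "__main__":
    print(build_prompt("@CHEMICAL$ is phosphorylated by @GENE$ in hepatocytes."))
```

In this reading, the "Instruction + Options" mode omits the demonstration block, while the expanded mode appends one or more worked examples before the query sentence; the paper's adaptive variants further tailor the instruction and/or the example to each input.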