Medical large language models are vulnerable to data-poisoning attacks

Keywords
Misinformation, Computer Science, Harm, Internet, Internet Privacy, Computer Security, Health Care, Data Science, Psychology, World Wide Web, Political Science, Social Psychology, Law
Authors
Daniel Alber, Zihao Yang, Anton Alyakin, Eunice Yang, N. Shesh, Aly Valliani, Jeff Zhang, Gabriel R. Rosenbaum, Ashley K. Amend-Thomas, David B. Kurland, C. Kremer, Alexander Eremiev, Bruck Negash, Daniel D. Wiggan, M. Nakatsuka, Karl L. Sangwon, Sean N. Neifert, Hammad A. Khan, Akshay Save, Adhith Palla, Eric A. Grin, Monika Hedman, Mustafa Nasir-Moin, Xujin Chris Liu, Lavender Yao Jiang, Michal Mankowski, Dorry L. Segev, Yindalon Aphinyanaphongs, Howard A. Riina, John G. Golfinos, Daniel A. Orringer, Douglas Kondziolka, Eric K. Oermann
Source
Journal: Nature Medicine [Nature Portfolio]
Identifier
DOI: 10.1038/s41591-024-03445-1
Abstract

The adoption of large language models (LLMs) in healthcare demands a careful analysis of their potential to spread false medical knowledge. Because LLMs ingest massive volumes of data from the open Internet during training, they are potentially exposed to unverified medical knowledge that may include deliberately planted misinformation. Here, we perform a threat assessment that simulates a data-poisoning attack against The Pile, a popular dataset used for LLM development. We find that replacement of just 0.001% of training tokens with medical misinformation results in harmful models more likely to propagate medical errors. Furthermore, we discover that corrupted models match the performance of their corruption-free counterparts on open-source benchmarks routinely used to evaluate medical LLMs. Using biomedical knowledge graphs to screen medical LLM outputs, we propose a harm mitigation strategy that captures 91.9% of harmful content (F1 = 85.7%). Our algorithm provides a unique method to validate stochastically generated LLM outputs against hard-coded relationships in knowledge graphs. In view of current calls for improved data provenance and transparent LLM development, we hope to raise awareness of emergent risks from LLMs trained indiscriminately on web-scraped data, particularly in healthcare where misinformation can potentially compromise patient safety.

Editor's summary: Large language models can be manipulated to generate misinformation by poisoning of a very small percentage of the data on which they are trained, but a harm mitigation strategy using biomedical knowledge graphs can offer a method for addressing this vulnerability.
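The knowledge-graph screening idea described in the abstract can be sketched in a few lines: extract (subject, relation, object) claims from a model's generated text and flag any claim that has no matching edge in a curated biomedical knowledge graph. The graph contents, triple format, and extraction step below are illustrative assumptions for exposition, not the paper's actual pipeline.

```python
# Minimal sketch of knowledge-graph screening for LLM outputs.
# The toy graph and triples here are hypothetical examples, not the
# authors' dataset or extraction method.

# A curated set of verified (subject, relation, object) biomedical facts.
KNOWLEDGE_GRAPH = {
    ("metformin", "treats", "type 2 diabetes"),
    ("aspirin", "interacts_with", "warfarin"),
    ("amoxicillin", "treats", "bacterial infection"),
}

def screen_output(claims):
    """Partition extracted claims into verified and flagged lists.

    `claims` is an iterable of (subject, relation, object) triples that an
    upstream step (e.g. a biomedical named-entity and relation extractor)
    pulled from the LLM's generated text.
    """
    verified, flagged = [], []
    for triple in claims:
        # A claim is trusted only if it matches a hard-coded graph edge.
        (verified if triple in KNOWLEDGE_GRAPH else flagged).append(triple)
    return verified, flagged

# Example: one true claim and one fabricated ("poisoned") claim.
claims = [
    ("metformin", "treats", "type 2 diabetes"),
    ("metformin", "treats", "bacterial infection"),  # not in the graph
]
ok, bad = screen_output(claims)
```

The key design point is that the graph acts as a deterministic allowlist: because its relationships are hard-coded rather than learned, a model corrupted during training cannot alter the reference against which its outputs are checked.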
