Postmarketing surveillance
Health care
Patient safety
Medicine
Adverse effect
Political science
Pharmacology
Law
Authors
Michael E. Matheny,Jie Yang,Joshua C. Smith,Colin G. Walsh,Mohammed Ali Al-Garadi,Sharon E. Davis,Keith Marsolo,Daniel Fabbri,Ruth Reeves,Kevin B. Johnson,Gerald J. Dal Pan,Robert Ball,Rishi Desai
Source
Journal: JAMA Network Open
[American Medical Association]
Date: 2024-08-16
Volume/Issue: 7 (8): e2428276
Identifier
DOI:10.1001/jamanetworkopen.2024.28276
Abstract
Importance: The Sentinel System is a key component of the US Food and Drug Administration (FDA) postmarketing safety surveillance commitment and uses clinical health care data to conduct analyses to inform drug labeling and safety communications, FDA advisory committee meetings, and other regulatory decisions. However, observational data are frequently deemed insufficient for reliable evaluation of safety concerns owing to limitations in underlying data or methodology. Advances in large language models (LLMs) provide new opportunities to address some of these limitations. However, careful consideration is necessary for how and where LLMs can be effectively deployed for these purposes.

Observations: LLMs may provide new avenues to support signal-identification activities to identify novel adverse event signals from narrative text of electronic health records. These algorithms may be used to support epidemiologic investigations examining the causal relationship between exposure to a medical product and an adverse event through development of probabilistic phenotyping of health outcomes of interest and extraction of information related to important confounding factors. LLMs may perform like traditional natural language processing tools by annotating text with controlled vocabularies, given additional tailored training activities. LLMs offer opportunities for enhancing information extraction from adverse event reports, medical literature, and other biomedical knowledge sources. There are several challenges that must be considered when leveraging LLMs for postmarket surveillance. Prompt engineering is needed to ensure that LLM-extracted associations are accurate and specific. LLMs require extensive infrastructure to use, which many health care systems lack, and this can impact diversity, equity, and inclusion, and result in obscuring significant adverse event patterns in some populations. LLMs are known to generate nonfactual statements, which could lead to false positive signals and downstream evaluation activities by the FDA and other entities, incurring substantial cost.

Conclusions and Relevance: LLMs represent a novel paradigm that may facilitate generation of information to support medical product postmarket surveillance activities that have not previously been possible. However, additional work is required to ensure LLMs can be used in a fair and equitable manner, minimize false positive findings, and support the necessary rigor of signal detection needed for regulatory activities.