Keywords
Computer science; Artificial intelligence; Natural language processing; Transformer; Relation extraction; Inference; Machine learning; Generative model; Language model; Normalization; Information extraction
Authors
Cheng Peng, Xi Yang, Aokun Chen, Zehao Yu, Kaleb E Smith, Anthony Costa, Mona G. Flores, Jiang Bian, Yonghui Wu
Identifier
DOI: 10.1093/jamia/ocae078
Abstract
Objective: To solve major clinical natural language processing (NLP) tasks using a unified text-to-text learning architecture based on a generative large language model (LLM) via prompt tuning.

Methods: We formulated 7 key clinical NLP tasks as text-to-text learning and solved them using one unified generative clinical LLM, GatorTronGPT, developed using the GPT-3 architecture and trained with up to 20 billion parameters. We adopted soft prompts (ie, trainable vectors) with a frozen LLM: the LLM parameters were not updated (ie, frozen) and only the soft prompt vectors were updated, a technique known as prompt tuning. The soft prompts were added as a prefix to the input layer and optimized during prompt tuning. We evaluated the proposed method on the 7 clinical NLP tasks and compared it with previous task-specific solutions based on Transformer models.

Results and Conclusion: The proposed approach achieved state-of-the-art performance on 5 of the 7 major clinical NLP tasks using one unified generative LLM. It outperformed previous task-specific Transformer models by ~3% for concept extraction and 7% for relation extraction applied to social determinants of health, by 3.4% for clinical concept normalization, by 3.4%-10% for clinical abbreviation disambiguation, and by 5.5%-9% for natural language inference. It also outperformed a previously developed prompt-based machine reading comprehension (MRC) model, GatorTron-MRC, for clinical concept and relation extraction. The proposed approach can deliver the "one model for all" promise from training to deployment using a unified generative LLM.
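The prompt-tuning setup described in the Methods (trainable soft-prompt vectors prepended to the input while all LLM weights stay frozen) can be illustrated with a minimal sketch. The snippet below is not the paper's implementation: it uses the public gpt2 checkpoint as a stand-in for GatorTronGPT, and the names MODEL_NAME, NUM_SOFT_TOKENS, and forward_with_soft_prompt are illustrative assumptions.

```python
# Minimal sketch of prompt tuning: a frozen causal LM plus a trainable
# soft prompt prepended at the embedding layer. Assumes PyTorch and
# Hugging Face transformers; gpt2 stands in for GatorTronGPT.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"       # illustrative stand-in model
NUM_SOFT_TOKENS = 20      # illustrative soft-prompt length

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Freeze every LLM parameter; only the soft prompt will receive gradients.
for p in model.parameters():
    p.requires_grad = False

embed_dim = model.get_input_embeddings().embedding_dim
# Trainable soft prompt: one continuous vector per "virtual token".
soft_prompt = nn.Parameter(torch.randn(NUM_SOFT_TOKENS, embed_dim) * 0.02)

def forward_with_soft_prompt(input_ids, labels):
    token_embeds = model.get_input_embeddings()(input_ids)      # (B, T, D)
    batch = input_ids.size(0)
    prefix = soft_prompt.unsqueeze(0).expand(batch, -1, -1)     # (B, P, D)
    inputs_embeds = torch.cat([prefix, token_embeds], dim=1)    # (B, P+T, D)
    # Soft-prompt positions carry no target token (-100 = ignore in loss).
    pad = torch.full((batch, NUM_SOFT_TOKENS), -100, dtype=labels.dtype)
    return model(inputs_embeds=inputs_embeds,
                 labels=torch.cat([pad, labels], dim=1))

# The optimizer sees only the soft prompt, never the LLM weights.
optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)
batch = tokenizer("Patient denies chest pain.", return_tensors="pt")
out = forward_with_soft_prompt(batch["input_ids"], batch["input_ids"].clone())
out.loss.backward()
optimizer.step()
```

Because only NUM_SOFT_TOKENS x embed_dim values are trained per task, each of the 7 tasks needs just a small prompt tensor on top of one shared frozen LLM, which is what makes the "one model for all" deployment story possible.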