Computer Science
Biomedicine
Data Science
Best Practices
Political Science
Bioinformatics
Biology
Law
Authors
Satya S. Sahoo,Joseph M. Plasek,Hua Xu,Özlem Uzuner,Trevor Cohen,Meliha Yetişgen,Hongfang Liu,Stéphane M. Meystre,Yanshan Wang
Identifier
DOI:10.1093/jamia/ocae074
Abstract
Generative large language models (LLMs) are a subset of transformer-based neural network architectures. LLMs have successfully leveraged a combination of increased parameter counts, improvements in computational efficiency, and large pre-training datasets to perform a wide spectrum of natural language processing (NLP) tasks. Prompting with a few examples (few-shot) or no examples (zero-shot) has enabled LLMs to achieve state-of-the-art performance in a broad range of NLP applications. This article by the American Medical Informatics Association (AMIA) NLP Working Group characterizes the opportunities, challenges, and best practices for our community to effectively leverage and advance the integration of LLMs in downstream NLP applications. This can be accomplished through a variety of approaches, including augmented prompting, instruction prompt tuning, and reinforcement learning from human feedback (RLHF).
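The abstract contrasts zero-shot prompting (instruction only) with few-shot prompting (instruction plus a handful of worked demonstrations). The minimal Python sketch below illustrates how the two prompt styles differ for a clinical NLP task; the task wording, example sentences, and function names are illustrative assumptions, not taken from the article.

```python
# Minimal sketch of zero-shot vs. few-shot prompt construction.
# All task text and demonstrations below are hypothetical, chosen
# only to illustrate the prompting styles named in the abstract.

TASK = "Extract all medication names from the sentence."

def zero_shot_prompt(sentence: str) -> str:
    # Zero-shot: the instruction alone, with no worked examples.
    return f"{TASK}\nSentence: {sentence}\nMedications:"

# Few-shot: a handful of input-output demonstrations that
# condition the model before it sees the new input.
FEW_SHOT_EXAMPLES = [
    ("Patient was started on metformin 500 mg twice daily.",
     "metformin"),
    ("Continue lisinopril and add atorvastatin at bedtime.",
     "lisinopril, atorvastatin"),
]

def few_shot_prompt(sentence: str) -> str:
    demos = "\n".join(
        f"Sentence: {s}\nMedications: {m}"
        for s, m in FEW_SHOT_EXAMPLES
    )
    return f"{TASK}\n{demos}\nSentence: {sentence}\nMedications:"

if __name__ == "__main__":
    s = "Discontinue warfarin; start apixaban 5 mg twice daily."
    print(zero_shot_prompt(s))
    print()
    print(few_shot_prompt(s))
```

Either string would then be sent to an LLM completion endpoint; the few-shot variant typically trades a longer prompt for better task adherence, which is the performance effect the abstract describes.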