Automatic summarization
Computer science
Language model
Key (lock)
Rank (graph theory)
Point (geometry)
Data science
Artificial intelligence
Natural language processing
Information retrieval
Computer security
Geometry
Mathematics
Combinatorics
Authors
Joyeeta Goswami,Kaushal Kumar Prajapati,Ashim Saha,Apu Kumar Saha
Identifier
DOI:10.1016/j.asoc.2024.111531
Abstract
Text summarization in the medical domain is one of the most crucial tasks, as it deals with critical human information. Consequently, proper summarization and key-point extraction from medical documents using pre-trained language models has become a key focus for researchers. However, due to the considerable amount of real-world data and the enormous memory required to train Large Language Models (LLMs), research on these models is challenging. To overcome these challenges, multiple prompting and tuning techniques are used. In this paper, the effectiveness of prompt engineering and parameter-efficient fine-tuning is studied for summarizing Hospital Discharge Summary (HDS) documents effectively, so that these models can accurately interpret medical terminology and context, generate brief but compact summaries, and extract focused themes, which opens new approaches for the application of LLMs in healthcare and makes HDS more patient-friendly. In this research, LLaMA 2 (Large Language Model Meta AI) has been considered as the base model. The model has been fine-tuned using QLoRA (Quantized Low-Rank Adapters), which can bring down the memory usage of LLMs without compromising data quality. This study explores how to use LLMs on HDS datasets, without prohibitive memory usage thanks to QLoRA, within electronic health record systems to further streamline the handling and retrieval of healthcare information.
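To make concrete why low-rank adapters (the "LoRA" part of QLoRA) cut the memory needed for fine-tuning, here is a minimal sketch of the trainable-parameter arithmetic. The hidden size (4096, roughly one projection matrix in a LLaMA-2-7B-scale model) and the adapter rank (16) are illustrative assumptions, not values taken from the paper:

```python
# Sketch: parameter savings from a low-rank adapter pair, as used in
# LoRA/QLoRA. Instead of updating a full d_in x d_out weight matrix,
# only two small matrices A (d_in x r) and B (r x d_out) are trained.

def full_params(d_in: int, d_out: int) -> int:
    """Trainable weights when fine-tuning the full matrix."""
    return d_in * d_out

def lora_trainable_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable weights for one LoRA adapter pair A and B."""
    return d_in * rank + rank * d_out

# Assumed sizes: one 4096 x 4096 attention projection, rank r = 16.
d, r = 4096, 16
full = full_params(d, d)                 # 16,777,216 weights
lora = lora_trainable_params(d, d, r)    # 131,072 weights
print(f"full: {full:,}  lora(r={r}): {lora:,}  ratio: {full // lora}x")
```

With these assumed sizes the adapter trains 128x fewer parameters per projection; QLoRA additionally keeps the frozen base weights in 4-bit quantized form, which is where most of the memory saving comes from.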