Computer science
Field (mathematics)
Natural language processing
Mathematics
Pure mathematics
Telecommunications
Authors
Hongshun Ling, Bin Yin, Chengze Ge, PengTao Shi, Jie Wang, Fan Xian, Fuliang Quan
Source
Journal: Communications in Computer and Information Science
Date: 2024-01-01
Pages: 21-30
Cited by: 1
Identifier
DOI:10.1007/978-981-97-1717-0_2
Abstract
This article presents the research content and results for the CHIP-PromptCBLUE (Chinese Biomedical Language Understanding Evaluation) benchmark task. PromptCBLUE promotes research on large language models for medicine: it evaluates a Chinese language model's multi-task ability across 18 medical task types, including medical entity recognition, medical text classification, medical language inference, and medical content generation. All tasks must be completed with a single large language model, which necessitates parameter-efficient fine-tuning methods whose trainable parameters stay within 1% of the model size. To address this, we propose a two-part method. First, we substantially improved model performance through data augmentation. We then further strengthened the model with an innovative entity-loss optimization of the large model's loss function. With this method, we achieved a score of 71.3822 on the CHIP-PromptCBLUE general track. This research provides new ideas for advancing large language models in the medical field.
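The abstract does not spell out the entity-loss formulation, but one plausible reading is a token-level cross-entropy in which tokens belonging to medical entities are up-weighted. The sketch below illustrates that idea only; the function name `entity_weighted_nll`, the weighting scheme, and the weight normalization are all assumptions for illustration, not the authors' actual loss.

```python
import math

def entity_weighted_nll(token_logprobs, entity_mask, entity_weight=2.0):
    """Weight-normalized negative log-likelihood over a token sequence.

    token_logprobs: log-probability the model assigned to each gold token.
    entity_mask:    1 for tokens inside a (medical) entity span, else 0.
    entity_weight:  multiplier applied to entity tokens (hypothetical).
    """
    total, norm = 0.0, 0.0
    for lp, is_entity in zip(token_logprobs, entity_mask):
        w = entity_weight if is_entity else 1.0
        total += -w * lp   # up-weighted NLL contribution of this token
        norm += w
    return total / norm    # normalize so the scale stays comparable

# With entity_weight=1.0 (or an all-zero mask) this reduces to the
# ordinary mean NLL; raising entity_weight shifts the gradient budget
# toward entity tokens, which matters for entity-heavy medical tasks.
```

In a fine-tuning loop, such a term would replace (or be mixed with) the standard language-modeling loss, so that errors on entity mentions are penalized more than errors on surrounding text.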