Computer science
Biomedical text mining
Terminology
Field (mathematical analysis)
Data science
Artificial intelligence
Health informatics
Taxonomy (biology)
Natural language processing
Health care
Text mining
Linguistics
Mathematical analysis
Economic growth
Philosophy
Plant
Mathematics
Economics
Biology
Authors
Benyou Wang, Qianqian Xie, Jiahuan Pei, Zhihong Chen, Prayag Tiwari, Zhao Li, Jie Fu
Abstract
Pre-trained language models (PLMs) have become the de facto paradigm for most natural language processing tasks. This also benefits the biomedical domain: researchers from the informatics, medicine, and computer science communities have proposed various PLMs trained on biomedical datasets, e.g., biomedical text, electronic health records, and protein and DNA sequences, for various biomedical tasks. However, the cross-disciplinary nature of biomedical PLMs hinders their spread across communities; some existing works are isolated from each other, lacking comprehensive comparison and discussion. It is nontrivial to produce a survey that not only systematically reviews recent advances in biomedical PLMs and their applications but also standardizes terminology and benchmarks. This article summarizes recent progress on pre-trained language models in the biomedical domain and their applications in downstream biomedical tasks. In particular, we discuss the motivations for PLMs in the biomedical domain and introduce the key concepts of pre-trained language models. We then propose a taxonomy of existing biomedical PLMs that systematically categorizes them from various perspectives. Their applications in downstream biomedical tasks are then discussed exhaustively. Finally, we illustrate various limitations and future trends, aiming to provide inspiration for future research.
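To make the "key concepts" concrete, the sketch below illustrates the masked-language-model (MLM) pretraining objective used by BERT-style biomedical PLMs: a fraction of tokens in a sentence is replaced with a `[MASK]` symbol, and the model is trained to recover the originals. This is a minimal illustration, not any specific model's implementation; the 15% masking rate follows common practice, and the whitespace "tokenizer" and example sentence are purely illustrative.

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]", seed=1):
    """Randomly replace a fraction of tokens with mask_token.

    Returns the corrupted sequence and a dict mapping each masked
    position to the original token the model must predict.
    """
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            masked.append(mask_token)
            targets[i] = tok  # training target at this position
        else:
            masked.append(tok)
    return masked, targets

# A toy biomedical sentence; with seed=1 only the first token is masked.
sentence = "aspirin inhibits platelet aggregation in patients".split()
corrupted, labels = mask_tokens(sentence)
print(corrupted)
print(labels)
```

During pretraining, a Transformer encoder reads `corrupted` and is optimized to predict the tokens in `labels`; domain-specific PLMs differ mainly in what corpus these sentences come from (biomedical text, clinical notes, protein or DNA sequences).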