Authors
Yuhang Liu,Tianhao Li,Zixuan Wang,Guiquan Zhu,Yongqing Zhang,Quan Zou
Identifier
DOI:10.1109/bibm58861.2023.10385599
Abstract
Accurate identification of cell types is a pivotal and intricate task in scRNA-seq data analysis. Recently, significant strides have been made in cell type annotation of scRNA-seq data using pre-trained language models (PLMs), surmounting the constraints of conventional approaches in precision, robustness, and generalization. However, fine-tuning large-scale pre-trained models incurs substantial computational expense. To tackle this issue, a promising line of research proposes parameter-efficient fine-tuning techniques for PLMs, which update only a small portion of the model parameters while attaining comparable performance. In this study, we extensively investigate parameter-efficient fine-tuning methods for scRNA-seq cell type annotation, employing scBERT as the backbone. We scrutinize the performance and compatibility of various parameter-efficient fine-tuning methods across multiple datasets. Through comprehensive analysis, we demonstrate the remarkable performance of parameter-efficient fine-tuning in cell type annotation. We hope this study inspires new thinking in scRNA-seq data analysis.
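The abstract does not name the specific parameter-efficient methods evaluated; as one widely used instance of the idea, a LoRA-style update (a minimal NumPy sketch, not the authors' code) freezes the pre-trained weight matrix and trains only two small low-rank factors, so the trainable-parameter fraction stays tiny:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Forward pass: frozen weight W plus trainable low-rank update B @ A."""
    return x @ (W + alpha * (B @ A)).T

# Illustrative sizes only (real PLM layers are much larger).
d_in, d_out, r = 200, 200, 4
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))       # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                     # zero init: update starts at zero

y = lora_forward(rng.standard_normal((8, d_in)), W, A, B)

full_params = W.size                         # 40000 parameters, all frozen
lora_params = A.size + B.size                # 1600 trainable parameters
print(y.shape, lora_params / full_params)    # (8, 200) 0.04
```

Only `A` and `B` would receive gradients during fine-tuning, here 4% of the layer's parameters; the same layout applies to each attention or feed-forward weight that the method targets.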