Computer science
Language model
Transformer
Encoder
Artificial intelligence
Natural language processing
Task (project management)
Machine learning
Speech recognition
Engineering
Electrical engineering
Economics
Management
Voltage
Operating system
Authors
Chi Sun, Xipeng Qiu, Yige Xu, Xuanjing Huang
Identifier
DOI:10.1007/978-3-030-32381-3_16
Abstract
Language model pre-training has proven to be useful in learning universal language representations. As a state-of-the-art pre-trained language model, BERT (Bidirectional Encoder Representations from Transformers) has achieved remarkable results in many language understanding tasks. In this paper, we conduct exhaustive experiments to investigate different fine-tuning methods of BERT on the text classification task and provide a general solution for BERT fine-tuning. The proposed solution obtains new state-of-the-art results on eight widely studied text classification datasets.
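For illustration only, the sketch below shows one common way to fine-tune BERT for text classification with the Hugging Face transformers library. The model name (bert-base-uncased), the toy data, and the hyperparameters are assumptions made for this example and are not taken from the paper, which investigates a range of fine-tuning strategies beyond this basic recipe.

# Minimal, illustrative BERT fine-tuning sketch (not the authors' exact setup).
import torch
from torch.optim import AdamW
from transformers import BertTokenizer, BertForSequenceClassification

# Hypothetical toy dataset: (text, label) pairs for a binary sentiment task.
texts = ["a wonderful, heartfelt film", "a dull and lifeless sequel"]
labels = torch.tensor([1, 0])

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Tokenize with padding/truncation so all sequences share one tensor shape.
batch = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")

# Standard fine-tuning loop: the classification head and all encoder layers are updated.
optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a few passes over the toy batch
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Inference: the predicted class is the argmax over the logits.
model.eval()
with torch.no_grad():
    preds = model(**batch).logits.argmax(dim=-1)
print(preds)

In practice the same pattern is applied with a proper training/validation split, mini-batching via a DataLoader, and a learning-rate schedule; the lines above only illustrate the basic fine-tuning mechanics discussed in the abstract.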