Keywords
Computer science, Source code, Coding (set theory), Natural language processing, Artificial intelligence, Programming language, Open source, Natural language, Representation (politics), Software, Set (abstract data type), Politics, Political science, Law
Authors
Ke Liu, Guang Yang, Xiang Chen, Yanlin Zhou
Identifier
DOI: 10.1145/3545258.3545260
Abstract
With the development of deep learning and natural language processing techniques, the performance of many source code-related tasks can be improved by using pre-trained models. Among these pre-trained models, CodeBert is a bi-modal model pre-trained on programming languages and natural languages, and it has been applied successfully to many source code-related tasks. Previous studies mainly use the output vector of CodeBert's last layer as the code semantic representation when fine-tuning for downstream source code-related tasks. However, this setting may miss valuable representational information that is captured by the other layers of CodeBert.
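To make the contrast concrete, the sketch below shows the last-layer representation that prior studies fine-tune on, next to the per-layer hidden states that CodeBert also exposes. This is a minimal illustration assuming the HuggingFace transformers library and the public microsoft/codebert-base checkpoint; the paper itself does not specify an implementation, and the example input is hypothetical.

```python
# Minimal sketch: extracting CodeBert representations, assuming the
# HuggingFace `transformers` library and the `microsoft/codebert-base`
# checkpoint (assumptions; not specified by the paper).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base",
                                  output_hidden_states=True)
model.eval()

# A toy code snippet to encode (hypothetical example input).
code = "def add(a, b): return a + b"
inputs = tokenizer(code, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# The setting the abstract questions: only the last layer's output,
# typically the first ([CLS]-style) token vector, feeds the fine-tuned head.
last_layer_cls = outputs.last_hidden_state[:, 0, :]   # shape (1, 768)

# What the other layers also provide: 13 hidden states for this 12-layer
# RoBERTa-style encoder (embedding layer + one per transformer layer),
# any of which may carry information the last layer alone misses.
all_layers = torch.stack(outputs.hidden_states)       # shape (13, 1, seq_len, 768)
per_layer_cls = all_layers[:, :, 0, :]                # shape (13, 1, 768)
```

A downstream classifier could, for instance, learn a weighting over per_layer_cls rather than consuming last_layer_cls alone; exploiting that multi-layer information is the gap the abstract points to.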