Computer science
Task (project management)
Coding (set theory)
Software
Software engineering
Software development
Data science
Artificial intelligence
Human–computer interaction
Knowledge management
Programming language
Engineering
Systems engineering
Set (abstract data type)
Authors
Zeju Cai, Jianguo Chen, Wenqing Chen, Weicheng Wang, Xiangyuan Zhu, Aijia Ouyang
Identifiers
DOI:10.1145/3639478.3643533
Abstract
Large Language Models (LLMs) have revolutionized code intelligence tasks, but their performance in specific software development tasks often requires fine-tuning with task-specific data. However, acquiring such data is challenging due to privacy concerns. We introduce F-CodeLLM, a novel federated learning framework for adapting LLMs to software development tasks while preserving code data privacy. Leveraging federated learning and LoRA-based efficient fine-tuning, F-CodeLLM allows organizations to collaboratively improve LLMs without sharing sensitive data. Our experiments demonstrate that F-CodeLLM achieves comparable results to centralized fine-tuning methods and excels in multi-language environments, marking a significant advancement in the application of LLMs for software engineering.