Computer science
Rank (graph theory)
Initialization
Inference
Process (computing)
Generalization
Adaptation (eye)
Representation (politics)
Machine learning
Artificial intelligence
Theoretical computer science
Mathematics
Mathematical analysis
Physics
Combinatorics
Politics
Law
Political science
Optics
Programming language
Operating system
Authors
Ning Ding, Xue-Chuan Lv, Qiaosen Wang, Yulin Chen, Ligang Wu, Zhiyuan Liu, Maosong Sun
Identifiers
DOI: 10.18653/v1/2023.emnlp-main.252
Abstract
Fine-tuning pre-trained large language models in a parameter-efficient manner is widely studied for its effectiveness and efficiency. The popular method of low-rank adaptation (LoRA) offers a notable approach, hypothesizing that the adaptation process is intrinsically low-dimensional. Although LoRA has demonstrated commendable performance, it is implemented with a fixed and unalterable intrinsic rank that might not always be the ideal choice. Recognizing the need for more flexible adaptation, we extend the methodology of LoRA to an innovative approach we call sparse low-rank adaptation (SoRA) that enables dynamic adjustments to the intrinsic rank during the adaptation process. We achieve this through the incorporation of a gate unit optimized with proximal gradient method in the training stage, controlling the cardinality of rank under the sparsity of the gate. In the subsequent inference stage, we eliminate the parameter blocks corresponding to the zeroed-out ranks, to reduce each SoRA module back to a concise yet rank-optimal LoRA. Our approach strengthens the representation power of LoRA by initializing it with a higher rank, while efficiently taming a temporarily increased number of parameters via updating in a sparse way. We further introduce a sparsifying scheduler for SoRA, aiming to examine the impact of the number of non-zero parameters on the model’s memorization and generalization. Our experimental results demonstrate that SoRA can outperform other baselines even with 70% retained parameters and 70% training time.
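The abstract outlines the mechanism: each low-rank update carries a per-rank gate vector that is driven sparse with a proximal gradient (soft-thresholding) step, and ranks whose gates reach zero are pruned away before inference, leaving a compact LoRA-style module. Below is a minimal sketch of that idea, assuming PyTorch; the names SoRALinear and prox_step, the initialization choices, and the pruning snippet are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class SoRALinear(nn.Module):
    """Frozen pre-trained weight W plus a gated low-rank update: y = W x + B diag(g) A x."""
    def __init__(self, in_features: int, out_features: int, r: int = 16):
        super().__init__()
        # Stand-in for the frozen pre-trained weight (excluded from the optimizer).
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02,
                                   requires_grad=False)
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(out_features, r))        # up-projection, zero init
        self.gate = nn.Parameter(torch.ones(r))                    # per-rank gate g

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        low_rank = ((x @ self.A.t()) * self.gate) @ self.B.t()
        return x @ self.weight.t() + low_rank

@torch.no_grad()
def prox_step(gate: torch.Tensor, lr: float, lam: float) -> None:
    """Soft-thresholding, the proximal operator of lam * ||g||_1,
    applied to the gate after the usual gradient step on A and B."""
    gate.copy_(torch.sign(gate) * torch.clamp(gate.abs() - lr * lam, min=0.0))

# After training, ranks whose gate entries were driven to zero can be dropped,
# reducing the module to a plain low-rank update of smaller rank (illustrative):
layer = SoRALinear(768, 768, r=16)
keep = layer.gate.detach().abs() > 0
A_kept, B_kept = layer.A[keep], layer.B[:, keep]
```

In this sketch the gate is the only parameter updated with the proximal step; its sparsity directly controls how many rank-one components survive, which is the dynamic-rank behavior the abstract describes.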