Pruning
Popularity
Computer science
Language model
Key (lock)
Scratch
Range (aeronautics)
Machine learning
Artificial intelligence
Engineering
Programming language
Psychology
Biology
Social psychology
Computer security
Aerospace engineering
Agronomy
Authors
Mengzhou Xia, Tianyu Gao, Zhiyuan Zeng, Danqi Chen
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Citations: 8
Identifier
DOI: 10.48550/arxiv.2310.06694
Abstract
The popularity of LLaMA (Touvron et al., 2023a;b) and other recently emerged moderate-sized large language models (LLMs) highlights the potential of building smaller yet powerful LLMs. Regardless, the cost of training such models from scratch on trillions of tokens remains high. In this work, we study structured pruning as an effective means to develop smaller LLMs from pre-trained, larger models. Our approach employs two key techniques: (1) targeted structured pruning, which prunes a larger model to a specified target shape by removing layers, heads, and intermediate and hidden dimensions in an end-to-end manner, and (2) dynamic batch loading, which dynamically updates the composition of sampled data in each training batch based on varying losses across different domains. We demonstrate the efficacy of our approach by presenting the Sheared-LLaMA series, pruning the LLaMA2-7B model down to 1.3B and 2.7B parameters. Sheared-LLaMA models outperform state-of-the-art open-source models of equivalent sizes, such as Pythia, INCITE, and OpenLLaMA models, on a wide range of downstream and instruction tuning evaluations, while requiring only 3% of compute compared to training such models from scratch. This work provides compelling evidence that leveraging existing LLMs with structured pruning is a far more cost-effective approach for building smaller LLMs.
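The dynamic batch loading idea in the abstract can be illustrated with a minimal sketch: domains whose current loss lags a per-domain reference loss get upweighted before the next batch is sampled. The function names, the exponential update rule, and the learning rate below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def update_domain_weights(weights, domain_losses, reference_losses, lr=1.0):
    """Upweight domains whose measured loss exceeds their reference loss
    (multiplicative-weights-style update; the exact rule is an assumption)."""
    gaps = np.maximum(domain_losses - reference_losses, 0.0)  # positive gap = lagging domain
    new_weights = weights * np.exp(lr * gaps)
    return new_weights / new_weights.sum()  # renormalize to a sampling distribution

def sample_batch_composition(weights, batch_size, rng=None):
    """Draw how many examples each domain contributes to the next training batch."""
    rng = rng or np.random.default_rng()
    return rng.multinomial(batch_size, weights)

# Example with three hypothetical pretraining domains.
weights = np.array([1 / 3, 1 / 3, 1 / 3])
current = np.array([2.1, 2.8, 2.4])    # losses measured on each domain
reference = np.array([2.0, 2.2, 2.5])  # target losses, e.g. from the source model
weights = update_domain_weights(weights, current, reference)
print(weights, sample_batch_composition(weights, batch_size=256))
```

In this sketch the second domain, whose loss is furthest above its reference, receives the largest share of the next batch; repeating the update each step steers training data toward whatever domains are hardest to recover after pruning.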