Computer Science
Mathematical Economics
Cognitive Science
Linguistics
Artificial Intelligence
Psychology
Mathematics
Philosophy
Authors
Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, Y. K. Li, Yingjun Wu, Daya Guo
Source
Journal: arXiv (Cornell University)
Date: 2024-02-05
Citations: 4
Identifier
DOI: 10.48550/arXiv.2402.03300
Abstract
Mathematical reasoning poses a significant challenge for language models due to its complex and structured nature. In this paper, we introduce DeepSeekMath 7B, which continues pre-training DeepSeek-Coder-Base-v1.5 7B with 120B math-related tokens sourced from Common Crawl, together with natural language and code data. DeepSeekMath 7B achieves an impressive score of 51.7% on the competition-level MATH benchmark without relying on external toolkits or voting techniques, approaching the performance level of Gemini-Ultra and GPT-4. Self-consistency over 64 samples from DeepSeekMath 7B achieves 60.9% on MATH. The mathematical reasoning capability of DeepSeekMath is attributed to two key factors: first, we harness the significant potential of publicly available web data through a meticulously engineered data selection pipeline; second, we introduce Group Relative Policy Optimization (GRPO), a variant of Proximal Policy Optimization (PPO) that enhances mathematical reasoning abilities while concurrently optimizing the memory usage of PPO.
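The abstract's central algorithmic claim is that GRPO replaces PPO's learned value-function baseline with a group-relative one: several answers are sampled for each question, and each answer's reward is normalized against the mean and standard deviation of its own group. Below is a minimal PyTorch sketch of that idea; the function names, sequence-level log-probability shapes, and hyperparameter values are illustrative assumptions, not the paper's released code.

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Group-relative advantages for GRPO.

    rewards: (num_questions, group_size) scores for the completions sampled
    per question (group_size > 1). Normalizing within each group stands in
    for the critic/value network that PPO would otherwise need.
    """
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

def grpo_loss(logp_new: torch.Tensor,
              logp_old: torch.Tensor,
              advantages: torch.Tensor,
              logp_ref: torch.Tensor | None = None,
              clip_eps: float = 0.2,
              kl_coef: float = 0.04) -> torch.Tensor:
    """PPO-style clipped surrogate with group-relative advantages.

    Log-probabilities are sequence-level here for brevity (the paper works
    at token level); clip_eps and kl_coef are placeholder values.
    """
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    loss = -torch.min(unclipped, clipped).mean()
    if logp_ref is not None:
        # Nonnegative KL estimator toward the reference policy, added
        # directly to the loss as a regularizer rather than folded into
        # the per-sample reward.
        kl = torch.exp(logp_ref - logp_new) - (logp_ref - logp_new) - 1.0
        loss = loss + kl_coef * kl.mean()
    return loss

if __name__ == "__main__":
    # One question, four sampled answers scored 0/1 by a verifier:
    rewards = torch.tensor([[1.0, 0.0, 1.0, 0.0]])
    print(grpo_advantages(rewards))  # correct answers get positive advantage
```

Because the group statistics replace a learned value function, no second network the size of the policy has to be trained or kept in memory, which is the memory saving the abstract attributes to GRPO.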