Computer science
Field (mathematical analysis)
Language model
Data science
Quality (concept)
Data modeling
Information retrieval
Data model (GIS)
Data mining
Artificial intelligence
Natural language processing
Database
Mathematical analysis
Philosophy
Mathematics
Epistemology
Authors
Jiaxi Cui,Zongjian Li,Yan Yang,Bohua Chen,Yuan Li
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Citations: 60
Identifier
DOI:10.48550/arxiv.2306.16092
Abstract
Large Language Models (LLMs) have shown the potential to revolutionize natural language processing tasks in various domains, sparking great interest in vertical-specific large models. However, unlike proprietary models such as BloombergGPT and FinGPT, which have leveraged their unique data accumulations to make strides in the finance domain, there have not been many similar large language models in the Chinese legal domain to facilitate its digital transformation. In this paper, we propose an open-source legal large language model named ChatLaw. Due to the importance of data quality, we carefully designed a legal domain fine-tuning dataset. Additionally, to overcome the problem of model hallucinations in legal data screening during reference data retrieval, we introduce a method that combines vector database retrieval with keyword retrieval to effectively reduce the inaccuracy of relying solely on vector database retrieval. Furthermore, we propose a self-attention method to enhance the ability of large models to overcome errors present in reference data, further optimizing the issue of model hallucinations at the model level and improving the problem-solving capabilities of large models. We also open-sourced our model and part of the data at https://github.com/PKU-YuanGroup/ChatLaw.
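The abstract's hybrid retrieval idea, blending vector-database similarity with keyword matching, can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the toy bag-of-words `embed`, the overlap-based `keyword_score`, and the blending weight `alpha` are all assumptions standing in for a real learned encoder and a production keyword engine such as BM25.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real system would use a learned encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    num = sum(a[t] * b[t] for t in a)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def keyword_score(query, doc):
    # Fraction of query terms that literally appear in the document.
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / len(q_terms) if q_terms else 0.0

def hybrid_retrieve(query, docs, alpha=0.5, top_k=3):
    # Blend the two signals: alpha weights the vector score,
    # (1 - alpha) the keyword score, so exact legal terms
    # can rescue documents a fuzzy embedding match would miss.
    q_vec = embed(query)
    scored = []
    for doc in docs:
        score = alpha * cosine(q_vec, embed(doc)) + (1 - alpha) * keyword_score(query, doc)
        scored.append((score, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]
```

For example, `hybrid_retrieve("contract offer acceptance", corpus, top_k=1)` would rank a statute text containing all three query terms above documents that only match loosely in embedding space.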