Memory
Computer science
Variety (cybernetics)
Component (thermodynamics)
Value (mathematics)
Artificial intelligence
Core (optical fiber)
Natural language processing
Machine learning
Cognitive psychology
Psychology
Telecommunications
Thermodynamics
Physics
Authors
Charith Peris, Christophe Dupuy, Jimit Majmudar, Rahil Parikh, Sami Smaili, Richard S. Zemel, Rahul Gupta
Identifier
DOI: 10.1145/3539597.3575792
Abstract
Pretrained large language models (LLMs) have consistently shown state-of-the-art performance across multiple natural language processing (NLP) tasks. These models are of much interest for a variety of industrial applications that use NLP as a core component. However, LLMs have also been shown to memorize portions of their training data, which can contain private information. Therefore, when building and deploying LLMs, it is of value to apply privacy-preserving techniques that protect sensitive data.