Computer science
Security token
Imitation
Language model
Coding (set theory)
Intellectual property
Property (philosophy)
Programming language
Computer security
Artificial intelligence
Operating system
Psychology
Social psychology
Epistemology
Philosophy
Set (abstract data type)
Authors
Zongjie Li, Chaozheng Wang, Shuai Wang, Cuiyun Gao
Identifiers
DOI: 10.1145/3576915.3623120
Abstract
The rise of large language model-based code generation (LLCG) has enabled various commercial services and APIs. Training LLCG models is often expensive and time-consuming, and the training data are often large-scale and even inaccessible to the public. As a result, the risk of intellectual property (IP) theft over LLCG models (e.g., via imitation attacks) has become a serious concern. In this paper, we propose the first watermark (WM) technique to protect LLCG APIs from remote imitation attacks. Our technique is based on replacing tokens in an LLCG output with their "synonyms" available in the programming language. A WM is thus defined as a stealthily tweaked distribution among token synonyms in LLCG outputs. We design six WM schemes (instantiated into over 30 WM passes) that rely on conceptually distinct token synonyms available in programming languages. Moreover, to check the IP of a suspicious model (i.e., to decide whether it was stolen from our protected LLCG API), we propose a statistical-test-based procedure that can directly check a remote, suspicious LLCG API.
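To make the synonym-substitution idea concrete, below is a minimal, hypothetical sketch in Python of one watermark pass and a matching statistical test. It is not the authors' implementation: it treats Python's interchangeable single- and double-quote string styles as "token synonyms", embeds a WM by skewing which style is emitted, and detects the WM with a one-sided binomial test over outputs collected from a suspicious API. The names `embed_watermark` and `looks_watermarked`, the bias parameter `WM_BIAS`, and the 0.5 baseline rate are all assumptions for illustration.

```python
# A minimal sketch (assumed names/parameters, not the paper's code) of a
# synonym-substitution watermark: Python string literals quoted with ' or "
# are semantically identical, so the choice of quote style is a covert
# channel. Embedding skews that choice; detection tests for the skew.
import io
import random
import tokenize

from scipy.stats import binomtest

WM_BIAS = 0.9  # assumed watermark parameter: target P(single quotes)


def embed_watermark(code: str, seed: int = 0) -> str:
    """Rewrite simple string literals so ~WM_BIAS of them use single quotes."""
    rng = random.Random(seed)
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(code).readline):
        if tok.type == tokenize.STRING and tok.string[0] in "'\"":
            body = tok.string[1:-1]
            # Flip only plain, quote-free literals; this also skips
            # triple-quoted strings, whose stripped body keeps quotes.
            if "'" not in body and '"' not in body:
                quote = "'" if rng.random() < WM_BIAS else '"'
                tok = tok._replace(string=quote + body + quote)
        out.append(tok)
    return tokenize.untokenize(out)


def looks_watermarked(snippets, baseline=0.5, alpha=0.01) -> bool:
    """One-sided binomial test: is the single-quote rate above baseline?"""
    single = total = 0
    for code in snippets:
        for tok in tokenize.generate_tokens(io.StringIO(code).readline):
            if tok.type == tokenize.STRING and tok.string[0] in "'\"":
                total += 1
                single += tok.string[0] == "'"
    if total == 0:
        return False
    return binomtest(single, total, baseline,
                     alternative="greater").pvalue < alpha


if __name__ == "__main__":
    wm = embed_watermark('x = "a"\ny = "b"\nz = "c"\n', seed=42)
    print(wm)
    print(looks_watermarked([wm] * 50))  # many samples -> detectable skew
```

In the paper's setting, the six WM schemes draw on multiple such synonym classes rather than a single one, and a real verifier would estimate the baseline synonym distribution from unwatermarked models instead of assuming 0.5.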