Computer science
Physical unclonable function
Redundancy (engineering)
Embedded system
Cryptography
Cyclic redundancy check
Key (lock)
Authentication (law)
Computer security
Artificial intelligence
Operating system
Network packet
Authors
Elena Dubrova, Oscar Naslund, Bernhard Degen, Anders Gawell, Yang Yu
Identifiers
DOI: 10.1109/eurospw.2019.00036
Abstract
Adversarial machine learning is an emerging threat to the security of Machine Learning (ML)-based systems. However, we can potentially use it as a weapon against ML-based attacks. In this paper, we focus on protecting Physical Unclonable Functions (PUFs) against ML-based modeling attacks. PUFs are an important cryptographic primitive for secret key generation and challenge-response authentication. However, none of the existing PUF constructions are both ML-attack-resistant and sufficiently lightweight to fit low-end embedded devices. We present a lightweight PUF construction, CRC-PUF, in which input challenges are de-synchronized from output responses to make a PUF model difficult to learn. The de-synchronization is done by an input transformation based on a Cyclic Redundancy Check (CRC). By changing the CRC generator polynomial for each new response, we assure that the success probability of recovering the transformed challenge is at most 2⁻⁸⁶ for 128-bit challenges and responses.
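The abstract describes de-synchronizing challenges from responses through a CRC-based input transformation whose generator polynomial changes for every new response. The Python sketch below illustrates the general idea of such a transformation with a generic bit-serial CRC over a 128-bit challenge and a selectable degree-128 generator polynomial; it is only a conceptual illustration under assumed interfaces (the function names, the random polynomial choice, and the demo at the bottom are not taken from the paper), not the CRC-PUF circuit or its polynomial-selection rule.

```python
import secrets

WIDTH = 128  # challenge/response width used in the paper's analysis


def crc_transform(challenge: int, poly: int) -> int:
    """Generic bit-serial CRC: remainder of challenge(x) * x^128 divided by G(x).

    `challenge` is a 128-bit integer; `poly` holds the low 128 coefficients of a
    degree-128 generator G(x) (the leading x^128 term is implicit). This is a
    textbook CRC sketch, not the exact CRC-PUF input transformation.
    """
    mask = (1 << WIDTH) - 1
    reg = 0
    # Shift the challenge bits in MSB-first, then flush with WIDTH zero bits
    # (the multiplication by x^128), reducing modulo G(x) whenever a set bit
    # would fall off the top of the register.
    for bit in [(challenge >> i) & 1 for i in range(WIDTH - 1, -1, -1)] + [0] * WIDTH:
        msb = (reg >> (WIDTH - 1)) & 1
        reg = ((reg << 1) | bit) & mask
        if msb:
            reg ^= poly
    return reg


def fresh_generator_polynomial() -> int:
    """Hypothetical helper: draw a new degree-128 generator for each response.

    The paper changes the generator polynomial per response; how it selects
    suitable polynomials is not modeled here, so this simply picks random
    coefficients with a nonzero constant term.
    """
    return secrets.randbits(WIDTH) | 1


if __name__ == "__main__":
    challenge = secrets.randbits(WIDTH)
    transformed = crc_transform(challenge, fresh_generator_polynomial())
    print(f"challenge:   {challenge:032x}")
    print(f"transformed: {transformed:032x}")
```

Because an attacker observing only the response does not know which generator polynomial was in effect, the mapping from the observed challenge to the value actually applied to the PUF is hidden, which is the de-synchronization the abstract refers to.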