When Random Is Bad: Selective CRPs for Protecting PUFs Against Modeling Attacks
Saved in:
| Title: | When Random Is Bad: Selective CRPs for Protecting PUFs Against Modeling Attacks |
|---|---|
| Authors: | Mieszko Ferens, Edlira Dushku, Sokol Kosta |
| Source: | Ferens, M., Dushku, E. & Kosta, S. 2025, 'When Random Is Bad: Selective CRPs for Protecting PUFs Against Modeling Attacks', IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 44, no. 5, pp. 1648-1661. https://doi.org/10.1109/TCAD.2024.3506217 |
| Publisher information: | Institute of Electrical and Electronics Engineers (IEEE), 2025. |
| Publication year: | 2025 |
| Keywords: | IoT, hardware security, physical unclonable function (PUF), selective challenge-response pair (CRP), modeling attack |
| Description: | Resource constraints are a significant challenge when designing secure IoT devices. To address this problem, the physical unclonable function (PUF) has been proposed as a lightweight security primitive capable of hardware fingerprinting. PUFs can provide device identification capabilities by exploiting random manufacturing variations, which can be used for authentication with a verifier that identifies a device through challenge-response interactions with its PUF. However, extensive research has shown that PUFs are inherently vulnerable to machine learning (ML) modeling attacks. Such attacks use challenge-response samples to train ML algorithms to learn the underlying parameters that define the physical PUF. In this article, we present a defensive technique to be used by the verifier, called selective challenge-response pairs (CRPs). We propose generating challenges selectively, instead of randomly, to negatively affect the parameters of ML models trained by attackers. Specifically, we provide three methods: 1) binary-coded with padding (BP); 2) random shifted pattern (RSP); and 3) binary shifted pattern (BSP). We characterize them based on Hamming distance patterns, and evaluate their applicability based on their effect on the uniqueness, uniformity, and reliability of the underlying PUF implementation. Furthermore, we analyze and compare their resilience to ML modeling with that of traditional random challenges on the well-studied XOR PUF, feed-forward PUF, and lightweight secure PUF, showing improved resilience of up to 2 times the number of CRPs. Finally, we suggest using our method on the interpose PUF to counter reliability-based attacks, which can overcome selective CRPs, and show that up to 4 times the number of CRPs can be exchanged securely. |
| Publication type: | Article |
| File description: | application/pdf |
| ISSN: | 1937-4151, 0278-0070 |
| DOI: | 10.1109/TCAD.2024.3506217 |
| Access URL: | https://vbn.aau.dk/da/publications/f5afe4dc-0c28-4c0c-a871-762a51c4a715 http://www.scopus.com/inward/record.url?scp=105003697915&partnerID=8YFLogxK https://doi.org/10.1109/TCAD.2024.3506217 https://vbn.aau.dk/ws/files/755415258/acceptedManuscript.pdf |
| Rights: | IEEE Copyright |
| Document code: | edsair.doi.dedup.....3dffb2eb20f77e1859f79915a2285d39 |
| Database: | OpenAIRE |
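The abstract above describes ML modeling attacks in which an attacker collects CRPs and trains a model to learn the PUF's internal parameters. As background, the sketch below illustrates this attack surface on a toy additive-delay arbiter PUF (the classic linear model from the PUF literature) attacked with logistic regression over the standard parity-vector feature transform. This is not the paper's BP/RSP/BSP construction; the stage count, sample counts, seeds, and function names are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy additive-delay arbiter PUF: response = sign(w . phi(c)), where phi is
# the standard parity-vector feature transform. Sizes are illustrative only.
rng = np.random.default_rng(0)
N_STAGES, N_CRPS = 64, 20_000

def parity_features(challenges):
    # phi_i = product of (1 - 2*c_j) for j >= i, plus a constant bias column.
    signs = 1 - 2 * challenges                       # map bits {0,1} -> {+1,-1}
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])

w_true = rng.standard_normal(N_STAGES + 1)           # secret delay parameters
challenges = rng.integers(0, 2, size=(N_CRPS, N_STAGES))   # *random* challenges
responses = (parity_features(challenges) @ w_true > 0).astype(int)

# Modeling attack: fit a linear classifier on the observed CRPs.
attack = LogisticRegression(max_iter=2000).fit(parity_features(challenges), responses)
test_c = rng.integers(0, 2, size=(5_000, N_STAGES))
test_r = (parity_features(test_c) @ w_true > 0).astype(int)
print("attack model accuracy:", attack.score(parity_features(test_c), test_r))
```

With purely random challenges, this linear attack typically converges to near-perfect prediction accuracy; shaping the challenge distribution that the attacker observes, as the paper's selective CRPs do, is one way to degrade that convergence.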
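The abstract also evaluates the methods through the standard PUF quality metrics: uniformity (balance of 0/1 responses on one device), uniqueness (inter-device Hamming distance), and reliability (intra-device stability across repeated measurements). Below is a minimal sketch of these conventional definitions, assuming response bit-vectors as inputs; the paper's exact evaluation setup, noise rate, and vector lengths used here are assumptions.

```python
import numpy as np

def uniformity(responses):
    # Fraction of 1s in one device's responses; ideal value is 0.5.
    return responses.mean()

def uniqueness(resp_a, resp_b):
    # Normalized inter-device Hamming distance; ideal value is 0.5.
    return np.mean(resp_a != resp_b)

def reliability(reference, remeasured):
    # 1 minus normalized intra-device Hamming distance; ideal value is 1.0.
    return 1.0 - np.mean(reference != remeasured)

# Hypothetical example: two devices plus a noisy re-measurement of the first.
rng = np.random.default_rng(1)
dev_a = rng.integers(0, 2, 1000)
dev_b = rng.integers(0, 2, 1000)
noisy_a = dev_a ^ (rng.random(1000) < 0.02).astype(int)   # ~2% bit-flip noise
print(uniformity(dev_a), uniqueness(dev_a, dev_b), reliability(dev_a, noisy_a))
```

A selective challenge scheme must preserve these metrics while hindering modeling, which is why the abstract reports the methods' effect on uniqueness, uniformity, and reliability alongside their ML resilience.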