Adaptive Shielding via Parametric Safety Proofs
| Title: | Adaptive Shielding via Parametric Safety Proofs |
|---|---|
| Authors: | Yao Feng, Jun Zhu, André Platzer, Jonathan Laurent |
| Source: | Proceedings of the ACM on Programming Languages, 9 (OOPSLA1), 816–843 |
| Publication Status: | Preprint |
| Publisher information: | Association for Computing Machinery (ACM), 2025. |
| Year of publication: | 2025 |
| Subjects: | FOS: Computer and information sciences, Computer Science - Programming Languages, ddc:000, Computer science, information & general works, Programming Languages (cs.PL) |
| Description: | A major challenge to deploying cyber-physical systems with learning-enabled controllers is to ensure their safety, especially in the face of changing environments that necessitate runtime knowledge acquisition. Model-checking and automated reasoning have been successfully used for shielding, i.e., to monitor untrusted controllers and override potentially unsafe decisions, but only at the cost of hard tradeoffs in terms of expressivity, safety, adaptivity, precision and runtime efficiency. We propose a programming-language framework that allows experts to statically specify adaptive shields for learning-enabled agents, which enforce a safe control envelope that gets more permissive as knowledge is gathered at runtime. A shield specification provides a safety model that is parametric in the current agent's knowledge. In addition, a nondeterministic inference strategy can be specified using a dedicated domain-specific language, enforcing that such knowledge parameters are inferred at runtime in a statistically-sound way. By leveraging language design and theorem proving, our proposed framework empowers experts to design adaptive shields with an unprecedented level of modeling flexibility, while providing rigorous, end-to-end probabilistic safety guarantees. |
| Document type: | Article |
| File description: | application/pdf |
| Language: | English |
| ISSN: | 2475-1421 |
| DOI: | 10.1145/3720450 |
| DOI: | 10.48550/arxiv.2502.18879 |
| DOI: | 10.5445/ir/1000181652 |
| Access URL: | http://arxiv.org/abs/2502.18879 https://publikationen.bibliothek.kit.edu/1000181652/159917722 https://publikationen.bibliothek.kit.edu/1000181652 https://doi.org/10.5445/IR/1000181652 |
| Rights: | CC BY arXiv Non-Exclusive Distribution |
| Accession number: | edsair.doi.dedup.....0a75ce892c10f6df917e88f9f16c3870 |
| Database: | OpenAIRE |
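The abstract's central idea, a shield that monitors an untrusted learned controller and overrides unsafe actions against a safety model parametric in runtime knowledge, can be illustrated with a minimal sketch. The Python below is a hypothetical toy, not the paper's framework, DSL, or API; every name and constant (`Knowledge`, `safe_envelope`, `disturbance_bound`, the limit `10.0`, the factor `1.1`) is an assumption introduced only for illustration.

```python
from dataclasses import dataclass
from typing import Sequence

# Toy illustration of adaptive shielding: a shield overrides any proposed
# action that falls outside a safe control envelope, and the envelope becomes
# more permissive as runtime knowledge tightens a conservative parameter.
# All names and numbers here are illustrative assumptions.

@dataclass
class Knowledge:
    # Conservative upper bound on an unknown disturbance magnitude.
    disturbance_bound: float

def safe_envelope(state: float, action: float, k: Knowledge) -> bool:
    # Toy parametric safety model: the next state must stay below a limit
    # even under the worst-case disturbance currently considered possible.
    next_state_worst_case = state + action + k.disturbance_bound
    return next_state_worst_case <= 10.0

def fallback_action(state: float, k: Knowledge) -> float:
    # A conservative action assumed to always satisfy the envelope.
    return 0.0

def shield(state: float, proposed: float, k: Knowledge) -> float:
    # Override the learned controller's proposal only when it is unsafe.
    return proposed if safe_envelope(state, proposed, k) else fallback_action(state, k)

def update_knowledge(k: Knowledge, observed: Sequence[float]) -> Knowledge:
    # Stand-in for the paper's statistically-sound inference strategy:
    # shrink the bound toward the largest disturbance observed so far,
    # never below it. (A real strategy carries a probabilistic guarantee.)
    if not observed:
        return k
    return Knowledge(disturbance_bound=min(k.disturbance_bound, max(observed) * 1.1))

if __name__ == "__main__":
    k = Knowledge(disturbance_bound=5.0)          # pessimistic prior knowledge
    print(shield(state=4.0, proposed=2.0, k=k))   # overridden -> 0.0
    k = update_knowledge(k, observed=[0.3, 0.5])  # knowledge gathered at runtime
    print(shield(state=4.0, proposed=2.0, k=k))   # now permitted -> 2.0
```

In this sketch the same proposed action is first rejected under pessimistic knowledge and then accepted after the bound is tightened, mirroring the abstract's claim that the safe control envelope "gets more permissive as knowledge is gathered at runtime."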