Traversing the subspace of adversarial patches
| Title: | Traversing the subspace of adversarial patches |
|---|---|
| Authors: | Jens Bayer, Stefan Becker, David Münch, Michael Arens, Jürgen Beyerer |
| Source: | Machine Vision and Applications, 36 (3), 70 |
| Publication Status: | Preprint |
| Publisher Information: | Springer Science and Business Media LLC, 2025. |
| Publication Year: | 2025 |
| Subject Terms: | ddc:004, FOS: Computer and information sciences, Manifold learning, Object detection, Computer Vision and Pattern Recognition (cs.CV), DATA processing & computer science, Adversarial attacks, Computer Science - Computer Vision and Pattern Recognition, Adversarial patches |
| Description: | Despite ongoing research on adversarial examples in deep learning for computer vision, some fundamentals of the nature of these attacks remain unclear. The manifold hypothesis posits that high-dimensional data tends to lie on a low-dimensional manifold. To test this hypothesis for adversarial patches, a special form of adversarial attack that can fool object detectors in the physical world, this paper analyzes a set of adversarial patches and investigates the reconstruction abilities of five different dimensionality reduction methods. Quantitatively, the attack performance of the reconstructed patches is measured, and the impact of patches sampled from the latent space during adversarial training is investigated. The evaluation is performed on two publicly available datasets for person detection. The results indicate that more sophisticated dimensionality reduction methods offer no advantage over a simple principal component analysis. |
| Document Type: | Article |
| File Description: | application/pdf |
| Language: | English |
| ISSN: | 0932-8092 (print); 1432-1769 (electronic) |
| DOI: | 10.1007/s00138-025-01689-6 (publisher); 10.5445/ir/1000181454 (KITopen); 10.48550/arxiv.2412.01527 (arXiv); 10.24406/publica-4642 (Fraunhofer Publica) |
| Access URL: | http://arxiv.org/abs/2412.01527; https://publikationen.bibliothek.kit.edu/1000181454/159713750; https://publikationen.bibliothek.kit.edu/1000181454; https://doi.org/10.5445/IR/1000181454 |
| Rights: | CC BY; arXiv Non-Exclusive Distribution |
| Accession Number: | edsair.doi.dedup.....536db562c1e224e24541f37eba72f23c |
| Database: | OpenAIRE |
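The abstract's core claim, that adversarial patches lie near a low-dimensional subspace which a plain PCA captures as well as more sophisticated methods, can be illustrated with a minimal round-trip through a PCA latent space. The sketch below is an assumption-laden illustration, not the paper's code: the random patch data, the 64×64 patch size, the 32-component latent space, and the Gaussian latent sampling are all placeholders for the setup the abstract describes.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical stand-in data: 200 flattened 64x64 RGB patches. The paper's
# patches are optimized adversarial attacks; random data here only
# demonstrates the projection/reconstruction pipeline itself.
rng = np.random.default_rng(0)
patches = rng.random((200, 64 * 64 * 3)).astype(np.float32)

# Fit a low-dimensional linear subspace. Under the manifold hypothesis, far
# fewer than 64*64*3 dimensions should explain most of the patch variance.
pca = PCA(n_components=32)
latent = pca.fit_transform(patches)            # project into the subspace
reconstructed = pca.inverse_transform(latent)  # map back to pixel space

# Reconstruction error measures how well the subspace captures the patches.
mse = np.mean((patches - reconstructed) ** 2)
print(f"mean reconstruction MSE: {mse:.6f}")

# Drawing new patches from the latent space (roughly what the abstract calls
# patches "sampled from the latent space during adversarial training"); a
# Gaussian fit to the latent codes is an assumed, simplistic sampling model.
mu, sigma = latent.mean(axis=0), latent.std(axis=0)
sampled = pca.inverse_transform(rng.normal(mu, sigma, size=(8, 32)))
sampled = sampled.clip(0.0, 1.0).reshape(8, 64, 64, 3)
```

In the evaluation the abstract describes, the reconstructed and sampled patches would then be rendered into person-detection images and scored against a detector, which is the attack-setting measurement performed on the two public datasets.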