Analysis of the vulnerability of YOLO neural network models to the Fast Gradient Sign Method attack
| Published in: | Nauchno-tekhnicheskii vestnik informatsionnykh tekhnologii, mekhaniki i optiki (Scientific and Technical Journal of Information Technologies, Mechanics and Optics), Vol. 24, no. 6, pp. 1066–1070 |
|---|---|
| Main Authors: | , , |
| Format: | Journal Article |
| Language: | English |
| Published: | ITMO University, 01.12.2024 |
| ISSN: | 2226-1494, 2500-0373 |
| Summary: | An analysis of the formalized conditions for creating universal images that computer vision algorithms falsely classify, known as adversarial examples, is presented for YOLO neural network models. A pattern is identified and studied: the successful creation of a universal destructive image with the Fast Gradient Sign Method (FGSM) attack depends on the generated dataset on which the neural networks were trained. The identified pattern is demonstrated for YOLOv8, YOLOv9, YOLOv10, and YOLOv11 classifier models trained on the standard COCO dataset. |
|---|---|
| DOI: | 10.17586/2226-1494-2024-24-6-1066-1070 |
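For context, the single-step FGSM update referenced in the summary is x_adv = x + ε · sign(∇_x L(θ, x, y)) (Goodfellow et al., 2015): each pixel is shifted by ±ε in the direction that most increases the classification loss. The following PyTorch sketch is illustrative only and is not taken from the paper; `model`, `loss_fn`, `x`, `y`, and `epsilon` are assumed placeholder names:

```python
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon=8 / 255):
    """One-step Fast Gradient Sign Method.

    Perturbs each pixel of the input batch x by +/- epsilon along the
    sign of the gradient of the loss w.r.t. the input, pushing the
    classifier toward a wrong prediction for the true labels y.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)   # loss against the true labels
    loss.backward()               # gradient of the loss w.r.t. pixels
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range
```

A universal destructive image of the kind the paper studies would, under the same assumptions, aggregate such sign-gradient steps over many images from the training dataset so that a single perturbation degrades classification across inputs rather than for one image only.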