Analysis of the vulnerability of YOLO neural network models to the Fast Sign Gradient Method attack
| Published in: | Nauchno-tekhnicheskiĭ vestnik informat͡sionnykh tekhnologiĭ, mekhaniki i optiki, Volume 24, Issue 6, pp. 1066–1070 |
|---|---|
| Main authors: | |
| Format: | Journal Article |
| Language: | English |
| Published: | ITMO University, 01.12.2024 |
| Subjects: | |
| ISSN: | 2226-1494, 2500-0373 |
| Online access: | Get full text |
| Summary: | An analysis of formalized conditions for creating universal images that are falsely classified by computer vision algorithms (adversarial examples) is presented for YOLO neural network models. A pattern in the successful creation of a universal destructive image with the Fast Sign Gradient Method attack, depending on the generated dataset on which the neural networks were trained, is identified and studied. The pattern is demonstrated for YOLO8, YOLO9, YOLO10, and YOLO11 classifier models trained on the standard COCO dataset. |
|---|---|
| DOI: | 10.17586/2226-1494-2024-24-6-1066-1070 |
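
The abstract refers to the Fast Sign Gradient Method, more commonly known as the Fast Gradient Sign Method (FGSM). As a minimal illustrative sketch (not taken from the article), the single-step attack against a generic PyTorch image classifier can be written as follows; the model, tensor shapes, and the epsilon budget are assumptions, and applying it to the YOLO classifier models named in the abstract would additionally require extracting their underlying torch module from the framework wrapper.

```python
# Minimal FGSM sketch (illustrative, not from the article).
# Assumes a PyTorch classifier that maps a (1, 3, H, W) image with pixel
# values in [0, 1] to raw class logits; epsilon is a hypothetical budget.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=8 / 255):
    """Return an adversarially perturbed copy of `image` via one FGSM step."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # loss w.r.t. the true class
    loss.backward()                               # gradient w.r.t. the input pixels
    adv = image + epsilon * image.grad.sign()     # step along the gradient sign
    return adv.clamp(0.0, 1.0).detach()           # keep pixels in a valid range

# Hypothetical usage with a stand-in torchvision classifier:
# from torchvision.models import resnet18, ResNet18_Weights
# model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
# adv = fgsm_perturb(model, image, label)
```

The universal destructive image discussed in the abstract is built over a generated dataset rather than a single input; that aggregation over many images is not shown in this single-image sketch.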