DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks



Bibliographic Details
Published in: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2574-2582
Main Authors: Moosavi-Dezfooli, Seyed-Mohsen; Fawzi, Alhussein; Frossard, Pascal
Format: Conference paper
Language: English
Published: IEEE, 01.06.2016
ISSN: 1063-6919
Description
Summary: State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well-sought perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.
DOI: 10.1109/CVPR.2016.282
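
Note: The summary above refers to the DeepFool algorithm, which computes a minimal adversarial perturbation by repeatedly linearizing the classifier around the current point and stepping toward the nearest decision boundary of that linearization. The Python/PyTorch sketch below only illustrates this idea under stated assumptions; it is not the authors' reference implementation, and the hypothetical `model` argument, the parameter values (num_classes, max_iter, overshoot), and all variable names are illustrative choices, not taken from the record.

import torch

def deepfool(model, x, num_classes=10, max_iter=50, overshoot=0.02):
    # Illustrative sketch: iteratively linearize the classifier around the
    # current point and take the smallest step that crosses the nearest
    # (linearized) class boundary, until the predicted label changes.
    x = x.clone().detach()
    x_adv = x.clone().detach().requires_grad_(True)
    orig_label = model(x.unsqueeze(0)).argmax().item()
    r_total = torch.zeros_like(x)

    for _ in range(max_iter):
        logits = model(x_adv.unsqueeze(0)).squeeze(0)
        if logits.argmax().item() != orig_label:
            break  # the classifier is already fooled

        # Gradient of the original-class score w.r.t. the current input.
        grad_orig = torch.autograd.grad(logits[orig_label], x_adv, retain_graph=True)[0]

        best_dist, best_step = None, None
        for k in range(num_classes):
            if k == orig_label:
                continue
            grad_k = torch.autograd.grad(logits[k], x_adv, retain_graph=True)[0]
            w_k = grad_k - grad_orig                        # normal of the k-th linearized boundary
            f_k = (logits[k] - logits[orig_label]).item()   # signed score difference
            dist_k = abs(f_k) / (w_k.norm().item() + 1e-8)  # distance to that boundary
            if best_dist is None or dist_k < best_dist:
                best_dist = dist_k
                best_step = (abs(f_k) / (w_k.norm() ** 2 + 1e-8)) * w_k

        # Accumulate the minimal step and slightly overshoot to cross the boundary.
        r_total = r_total + best_step
        x_adv = (x + (1 + overshoot) * r_total).detach().requires_grad_(True)

    return r_total, x_adv.detach()

In this sketch, `model` is assumed to be any differentiable classifier returning class logits for a single input image x (e.g. a tensor of shape (C, H, W)); the function returns the accumulated perturbation r_total, whose norm can serve as a per-sample robustness estimate, together with the perturbed image x_adv.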