Wild patterns: Ten years after the rise of adversarial machine learning

Detailed bibliography
Published in: Pattern Recognition, Vol. 84, pp. 317–331
Main authors: Biggio, Battista; Roli, Fabio
Medium: Journal Article
Language: English
Published: Elsevier Ltd, 01.12.2018
ISSN: 0031-3203, 1873-5142
Description
Summary:
• We provide a detailed review of the evolution of adversarial machine learning over the last ten years.
• We start from pioneering work up to more recent work aimed at understanding the security properties of deep learning algorithms.
• We review work in the context of different applications.
• We highlight common misconceptions related to the evaluation of the security of machine-learning and pattern recognition algorithms.
• We discuss the main limitations of current work, along with the corresponding future research paths towards designing more secure learning algorithms.

Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations, carefully crafted either at training or at test time, can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, has been investigated in the research field of adversarial machine learning. In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering, earlier work on the security of non-deep learning algorithms up to more recent work aimed at understanding the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks. We report interesting connections between these apparently different lines of work, highlighting common misconceptions related to the security evaluation of machine-learning algorithms. We review the main threat models and attacks defined to this end, and discuss the main limitations of current work, along with the corresponding future challenges towards the design of more secure learning algorithms.
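To make the abstract's notion of a test-time adversarial perturbation concrete, the sketch below shows the fast gradient sign method (FGSM), one canonical attack from the literature this survey covers. It is illustrative only, not the paper's own algorithm: the linear (logistic-regression) model, its random weights, and the budget eps are all assumed toy values.

import numpy as np

# Minimal FGSM sketch against a logistic-regression classifier.
# All model parameters and the input are toy values (assumptions).
rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1          # model weights and bias (assumed)
x, y = rng.normal(size=8), 1.0          # clean input and its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the logistic loss -log p(y|x) with respect to the input x:
# for y = 1 this is (sigmoid(w.x + b) - y) * w.
grad_x = (sigmoid(w @ x + b) - y) * w

eps = 0.25                              # L-infinity perturbation budget (assumed)
x_adv = x + eps * np.sign(grad_x)       # one-step FGSM perturbation

print("clean score:", sigmoid(w @ x + b))
print("adv. score :", sigmoid(w @ x_adv + b))

The one-step perturbation eps * sign(grad_x) moves x in the direction that most increases the loss under an L-infinity budget, so the classifier's confidence in the true class drops even though each input coordinate changes by at most eps.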
DOI: 10.1016/j.patcog.2018.07.023