Understanding adversarial attacks on deep learning based medical image analysis systems


Bibliographic Details
Published in: Pattern Recognition, Vol. 110, p. 107332
Authors: Ma, Xingjun; Niu, Yuhao; Gu, Lin; Wang, Yisen; Zhao, Yitian; Bailey, James; Lu, Feng
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.02.2021
ISSN: 0031-3203, 1873-5142
Description
Highlights:
- Medical image DNNs are easier to attack than DNNs for natural, non-medical images.
- The complex biological textures of medical images may lead to more vulnerable regions.
- State-of-the-art deep networks can be overparameterized for medical imaging tasks.
- Adversarial attacks on medical images can also be easily detected.
- The high detectability may be caused by perturbations outside the pathological regions.

Abstract: Deep neural networks (DNNs) have become popular for medical image analysis tasks such as cancer diagnosis and lesion detection. However, a recent study demonstrated that medical deep learning systems can be compromised by carefully engineered adversarial examples/attacks with small, imperceptible perturbations. This raises safety concerns about the deployment of these systems in clinical settings. In this paper, we provide a deeper understanding of adversarial examples in the context of medical images. From two different viewpoints, we find that medical DNN models can be more vulnerable to adversarial attacks than models for natural images. Surprisingly, we also find that medical adversarial attacks can be easily detected: simple detectors can achieve over 98% detection AUC against state-of-the-art attacks, owing to fundamental feature differences compared to normal examples. We believe these findings may form a useful basis for designing more explainable and secure medical deep learning systems.
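The abstract refers to adversarial examples crafted by adding small, imperceptible perturbations. A minimal sketch of the idea, using a toy NumPy logistic model (the weights and inputs here are hypothetical illustrations, not from the paper) and the one-step fast gradient sign method: the input is nudged in the direction that increases the classifier's loss, flipping the prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear "diagnosis" model: logit = w . x, class 1 if logit > 0.
w = np.array([2.0, -1.0])

def predict_logit(x):
    return float(w @ x)

def fgsm_perturb(x, y, eps):
    """One-step FGSM: move x by eps in the sign of the loss gradient."""
    p = sigmoid(predict_logit(x))    # P(class = 1)
    grad_x = (p - y) * w             # d(cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 1.0])             # clean input, true label y = 1
x_adv = fgsm_perturb(x, y=1.0, eps=0.4)

print(predict_logit(x))              # positive logit -> classified as 1
print(predict_logit(x_adv))          # negative logit -> prediction flipped to 0
```

With a bounded per-pixel change (here eps = 0.4 per feature) the perturbed input looks almost identical to the original yet crosses the decision boundary; the paper's finding is that such boundary-crossing perturbations are easier to find for medical-image models, but also leave detectable feature traces.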
DOI:10.1016/j.patcog.2020.107332