Understanding adversarial attacks on deep learning based medical image analysis systems

Detailed bibliography
Published in: Pattern Recognition, Volume 110, p. 107332
Main authors: Ma, Xingjun; Niu, Yuhao; Gu, Lin; Wang, Yisen; Zhao, Yitian; Bailey, James; Lu, Feng
Medium: Journal Article
Language: English
Published: Elsevier Ltd, 01.02.2021
ISSN: 0031-3203, 1873-5142
Description
Summary:
•Medical image DNNs are easier to attack than natural non-medical image DNNs.
•The complex biological textures of medical images may lead to more vulnerable regions.
•State-of-the-art deep networks can be overparameterized for medical imaging tasks.
•Medical image adversarial attacks can also be easily detected.
•The high detectability may be caused by perturbations outside the pathological regions.

Deep neural networks (DNNs) have become popular for medical image analysis tasks such as cancer diagnosis and lesion detection. However, a recent study demonstrates that medical deep learning systems can be compromised by carefully engineered adversarial examples/attacks with small, imperceptible perturbations. This raises safety concerns about the deployment of these systems in clinical settings. In this paper, we provide a deeper understanding of adversarial examples in the context of medical images. We find that, from two different viewpoints, medical DNN models can be more vulnerable to adversarial attacks than models for natural images. Surprisingly, we also find that medical adversarial attacks can be easily detected, i.e., simple detectors can achieve over 98% detection AUC against state-of-the-art attacks, due to fundamental feature differences compared with normal examples. We believe these findings may be a useful basis for designing more explainable and secure medical deep learning systems.
DOI: 10.1016/j.patcog.2020.107332
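
The summary above refers to small, imperceptible adversarial perturbations and to simple detectors that can flag them. As a minimal, illustrative sketch (not the paper's own implementation), the following Python/PyTorch code generates an L-infinity PGD adversarial example for a generic image classifier; the model, the 8/255 epsilon, the step size, the step count, and the [0, 1] input range are all assumptions for illustration.

# Minimal PGD (projected gradient descent) attack sketch, assuming a
# PyTorch classifier `model` and input images `x` scaled to [0, 1].
# Epsilon, step size, and step count are illustrative, not the paper's values.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Return adversarial examples within an L-infinity ball of radius eps."""
    x_adv = x.clone().detach()
    # Random start inside the epsilon ball.
    x_adv = x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)
    x_adv = torch.clamp(x_adv, 0.0, 1.0)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # Take a signed gradient step, then project back into the epsilon ball.
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
            x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()

A detector in the spirit of the paper's finding could then be as simple as a classifier (for example, logistic regression) trained on deep-feature statistics of clean versus adversarial inputs; the specific detectors and the reported 98% detection AUC are results from the paper, not from this sketch.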