Boosting Adversarial Attacks on Neural Networks with Better Optimizer

Bibliographic Details
Published in: Security and Communication Networks, Vol. 2021, pp. 1-9
Main Authors: Yin, Heng; Zhang, Hengwei; Wang, Jindong; Dou, Ruiyu
Format: Journal Article
Language: English
Published: London: Hindawi; John Wiley & Sons, Inc., 07.06.2021
ISSN: 1939-0114, 1939-0122
Description
Summary: Convolutional neural networks have outperformed humans in image recognition tasks, but they remain vulnerable to adversarial examples. Because these examples are crafted by adding imperceptible noise to normal images, their existence poses a potential security threat to deep learning systems. Sophisticated adversarial examples with strong attack performance can also serve as a tool for evaluating the robustness of a model. However, the success rate of adversarial attacks in black-box environments leaves room for improvement. This study therefore combines a modified Adam gradient descent algorithm with the iterative gradient-based attack method. The resulting Adam iterative fast gradient method is then used to improve the transferability of adversarial examples. Extensive experiments on ImageNet show that the proposed method achieves a higher attack success rate than existing iterative methods. By extending the method, we achieved a state-of-the-art attack success rate of 95.0% on defense models.
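The abstract describes combining Adam-style moment estimation with an iterative fast-gradient attack. The sketch below illustrates that general idea in PyTorch; it is not the paper's exact method, and the step size, hyperparameters (beta1, beta2, delta), and sign-based update rule are assumptions made for illustration.

```python
import torch

def adam_ifgsm(model, x, y, eps=16/255, steps=10,
               beta1=0.9, beta2=0.999, delta=1e-8):
    """Illustrative Adam-style iterative FGSM attack (a sketch of the
    general technique; the paper's modified Adam update may differ)."""
    alpha = eps / steps                  # assumed per-step budget
    x_adv = x.clone().detach()
    m = torch.zeros_like(x)              # first-moment estimate
    v = torch.zeros_like(x)              # second-moment estimate
    loss_fn = torch.nn.CrossEntropyLoss()

    for t in range(1, steps + 1):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)

        # Adam moment updates with bias correction
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad * grad
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)

        # Ascend the loss, then project back into the L-inf eps-ball;
        # assumes pixel values normalized to [0, 1]
        step = alpha * torch.sign(m_hat / (v_hat.sqrt() + delta))
        x_adv = (x_adv + step).detach()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)
        x_adv = x_adv.clamp(0, 1)

    return x_adv
```

Relative to plain I-FGSM, the (assumed) Adam-style moments smooth the gradient direction across iterations, which is the mechanism the abstract credits for improved transferability of the resulting adversarial examples.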
DOI:10.1155/2021/9983309