Boosting Adversarial Attacks on Neural Networks with Better Optimizer

Convolutional neural networks have outperformed humans in image recognition tasks, but they remain vulnerable to attacks from adversarial examples. Since these data are crafted by adding imperceptible noise to normal images, their existence poses potential security threats to deep learning systems....

Detailed bibliography
Published in: Security and Communication Networks, Volume 2021, pp. 1-9
Main authors: Yin, Heng; Zhang, Hengwei; Wang, Jindong; Dou, Ruiyu
Medium: Journal Article
Language: English
Publication details: London: Hindawi, 07.06.2021
John Wiley & Sons, Inc
ISSN: 1939-0114, 1939-0122
Abstract Convolutional neural networks have outperformed humans in image recognition tasks, but they remain vulnerable to attacks from adversarial examples. Since these data are crafted by adding imperceptible noise to normal images, their existence poses potential security threats to deep learning systems. Sophisticated adversarial examples with strong attack performance can also be used as a tool to evaluate the robustness of a model. However, the success rate of adversarial attacks can be further improved in black-box environments. Therefore, this study combines a modified Adam gradient descent algorithm with the iterative gradient-based attack method. The proposed Adam iterative fast gradient method is then used to improve the transferability of adversarial examples. Extensive experiments on ImageNet showed that the proposed method offers a higher attack success rate than existing iterative methods. By extending our method, we achieved a state-of-the-art attack success rate of 95.0% on defense models.
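Note on the method (illustrative only): the abstract describes plugging an Adam-style update into an iterative, epsilon-bounded gradient attack to improve the transferability of adversarial examples. The PyTorch sketch below shows that general idea under stated assumptions; the function name, hyperparameters, and the sign-step and projection details are illustrative choices and do not reproduce the authors' published Adam iterative fast gradient method.

# Minimal sketch, assuming a PyTorch classifier `model`, inputs `x` in [0, 1],
# and labels `y`; NOT the authors' code, only the general idea of an
# Adam-style update inside an L-infinity-bounded iterative attack.
import torch
import torch.nn.functional as F

def adam_iterative_fgm(model, x, y, eps=16 / 255, steps=10,
                       beta1=0.9, beta2=0.999, delta=1e-8):
    """Hypothetical helper: craft adversarial examples with Adam-style
    moment estimates replacing the raw gradient in an iterative FGSM loop."""
    alpha = eps / steps                     # per-iteration step size
    x_adv = x.clone().detach()
    m = torch.zeros_like(x)                 # first-moment (mean) estimate
    v = torch.zeros_like(x)                 # second-moment (variance) estimate

    for t in range(1, steps + 1):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]

        # Adam-style accumulation with bias correction
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad * grad
        update = (m / (1 - beta1 ** t)) / ((v / (1 - beta2 ** t)).sqrt() + delta)

        # ascend the loss with a sign step, then project back into the eps-ball
        x_adv = x_adv.detach() + alpha * update.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)

    return x_adv.detach()

In a black-box transfer setting a sketch like this would be run against a white-box surrogate model and the resulting x_adv fed to the unseen target model; the attack success rates quoted in the abstract come from the authors' experiments, not from this sketch.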
Author Zhang, Hengwei
Dou, Ruiyu
Wang, Jindong
Yin, Heng
Author_xml – sequence: 1
  givenname: Heng
  orcidid: 0000-0003-3927-0932
  surname: Yin
  fullname: Yin, Heng
  organization: State Key Laboratory of Mathematical Engineering and Advanced Computing, Zhengzhou 450001, Henan, China (lsec.cc.ac.cn)
– sequence: 2
  givenname: Hengwei
  orcidid: 0000-0002-1649-7336
  surname: Zhang
  fullname: Zhang, Hengwei
  organization: State Key Laboratory of Mathematical Engineering and Advanced Computing, Zhengzhou 450001, Henan, China (lsec.cc.ac.cn)
– sequence: 3
  givenname: Jindong
  orcidid: 0000-0002-7641-9014
  surname: Wang
  fullname: Wang, Jindong
  organization: State Key Laboratory of Mathematical Engineering and Advanced Computing, Zhengzhou 450001, Henan, China (lsec.cc.ac.cn)
– sequence: 4
  givenname: Ruiyu
  orcidid: 0000-0001-6231-1582
  surname: Dou
  fullname: Dou, Ruiyu
  organization: State Key Laboratory of Mathematical Engineering and Advanced Computing, Zhengzhou 450001, Henan, China (lsec.cc.ac.cn)
CitedBy_id crossref_primary_10_1016_j_ins_2022_07_157
crossref_primary_10_1155_2023_3427385
crossref_primary_10_3390_electronics12061464
crossref_primary_10_1016_j_jisa_2022_103227
Cites_doi 10.3390/electronics9111957
10.1609/aaai.v31i1.11231
10.1109/CVPR.2016.308
10.1109/CVPR.2016.90
10.1109/CVPR.2015.7298594
10.1145/3052973.3053009
10.1007/978-3-319-46493-0_38
10.1145/3065386
10.1145/3321707.3321749
10.1109/CVPR.2019.00444
10.3390/electronics9101634
ContentType Journal Article
Copyright Copyright © 2021 Heng Yin et al.
Copyright © 2021 Heng Yin et al. This is an open access article distributed under the Creative Commons Attribution License (the “License”), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. https://creativecommons.org/licenses/by/4.0
Copyright_xml – notice: Copyright © 2021 Heng Yin et al.
– notice: Copyright © 2021 Heng Yin et al. This is an open access article distributed under the Creative Commons Attribution License (the “License”), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. https://creativecommons.org/licenses/by/4.0
DBID RHU
RHW
RHX
AAYXX
CITATION
7SC
7SP
8FD
JQ2
L7M
L~C
L~D
DOI 10.1155/2021/9983309
DatabaseName Hindawi Publishing Complete
Hindawi Publishing Subscription Journals
Hindawi Publishing Open Access
CrossRef
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
DatabaseTitle CrossRef
Technology Research Database
Computer and Information Systems Abstracts – Academic
Electronics & Communications Abstracts
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts Professional
DatabaseTitleList
Technology Research Database
CrossRef
Database_xml – sequence: 1
  dbid: RHX
  name: Hindawi Publishing Open Access
  url: http://www.hindawi.com/journals/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Engineering
EISSN 1939-0122
Editor Chen, Ting
Editor_xml – sequence: 1
  givenname: Ting
  surname: Chen
  fullname: Chen, Ting
EndPage 9
ExternalDocumentID 10_1155_2021_9983309
GrantInformation_xml – fundername: National Key Research and Development Program of China
  grantid: 2017YFB0801900
GroupedDBID .4S
.DC
05W
0R~
123
1OC
3SF
4.4
52U
5DZ
66C
8-1
8UM
AAESR
AAFWJ
AAJEY
AAONW
ACGFO
ADBBV
ADIZJ
AENEX
AFBPY
AFKRA
AJXKR
ALMA_UNASSIGNED_HOLDINGS
ARAPS
ARCSS
ATUGU
AZVAB
BCNDV
BENPR
BGLVJ
BHBCM
BNHUX
BOGZA
BRXPI
CCPQU
CS3
DR2
DU5
EBS
EIS
F1Z
G-S
GROUPED_DOAJ
HCIFZ
HZ~
IAO
ICD
ITC
IX1
K7-
LITHE
MY.
MY~
NNB
O9-
OIG
OK1
P2P
PIMPY
RHU
RHW
RHX
TH9
TUS
W99
WBKPD
XV2
24P
AAMMB
AAYXX
ACCMX
ADMLS
AEFGJ
AGXDD
AIDQK
AIDYY
ALUQN
CITATION
H13
7SC
7SP
8FD
JQ2
L7M
L~C
L~D
PUEGO
ID FETCH-LOGICAL-c337t-535cfc68d99d2a2ab3c6c7f5ce4bbc97a4ce0bb32d6c8d820e8157b662c577023
IEDL.DBID RHX
ISICitedReferencesCount 8
ISSN 1939-0114
IngestDate Sat Aug 23 12:39:45 EDT 2025
Tue Nov 18 22:32:12 EST 2025
Sat Nov 29 02:59:33 EST 2025
Sun Jun 02 19:15:46 EDT 2024
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Language English
License This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
https://creativecommons.org/licenses/by/4.0
LinkModel DirectLink
ORCID 0000-0003-3927-0932
0000-0002-7641-9014
0000-0001-6231-1582
0000-0002-1649-7336
OpenAccessLink https://dx.doi.org/10.1155/2021/9983309
PQID 2543214037
PQPubID 1046363
PageCount 9
ParticipantIDs proquest_journals_2543214037
crossref_citationtrail_10_1155_2021_9983309
crossref_primary_10_1155_2021_9983309
hindawi_primary_10_1155_2021_9983309
PublicationCentury 2000
PublicationDate 2021-06-07
PublicationDateYYYYMMDD 2021-06-07
PublicationDate_xml – month: 06
  year: 2021
  text: 2021-06-07
  day: 07
PublicationDecade 2020
PublicationPlace London
PublicationPlace_xml – name: London
PublicationTitle Security and communication networks
PublicationYear 2021
Publisher Hindawi
John Wiley & Sons, Inc
Publisher_xml – name: Hindawi
– name: John Wiley & Sons, Inc
SourceID proquest
crossref
hindawi
SourceType Aggregation Database
Enrichment Source
Index Database
Publisher
StartPage 1
SubjectTerms Algorithms
Artificial neural networks
Iterative methods
Machine learning
Methods
Neural networks
Object recognition
Optimization
Title Boosting Adversarial Attacks on Neural Networks with Better Optimizer
URI https://dx.doi.org/10.1155/2021/9983309
https://www.proquest.com/docview/2543214037
Volume 2021
WOSCitedRecordID wos000665877000004
journalDatabaseRights – providerCode: PRVWIB
  databaseName: Wiley Online Library Open Access
  customDbUrl:
  eissn: 1939-0122
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0061474
  issn: 1939-0114
  databaseCode: 24P
  dateStart: 20170101
  isFulltext: true
  titleUrlDefault: https://authorservices.wiley.com/open-science/open-access/browse-journals.html
  providerName: Wiley-Blackwell
linkProvider Hindawi Publishing