Adversarial detection by approximation of ensemble boundary

Published in: Neurocomputing (Amsterdam), Vol. 622, Article 129364
Author: Windeatt, Terry
Format: Journal Article
Language: English
Published: Elsevier B.V., 14 March 2025
ISSN: 0925-2312
Online access: Full text
Abstract
Despite being effective in many application areas, Deep Neural Networks (DNNs) are vulnerable to attack. In object recognition, the attack takes the form of a small perturbation added to an image which causes the DNN to misclassify yet appears unchanged to a human. Adversarial attacks lead to defences that are themselves subject to attack, and the attack/defence strategies provide important information about the properties of DNNs. In this paper, a novel method of detecting adversarial attacks is proposed for an ensemble of DNNs solving two-class pattern recognition problems. The ensemble is combined using Walsh coefficients, which are capable of approximating Boolean functions and thereby controlling the complexity of the decision boundary. The hypothesis in this paper is that decision boundaries with high curvature allow adversarial perturbations to be found, but the perturbations also change the curvature of the decision boundary, which is then approximated by the Walsh coefficients differently than it is for clean images. Besides controlling boundary complexity, the coefficients also measure correlation with class labels, which may aid in understanding the learning and transferability properties of DNNs. While the experiments here use images, the proposed approach of modelling two-class ensemble decision boundaries could in principle be applied to any application area.
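The idea of combining a two-class ensemble through Walsh coefficients can be illustrated with a minimal sketch. This is not the paper's exact procedure: it only estimates first-order Walsh coefficients as correlations between each base classifier's vote and the class label (both mapped to the {-1, +1} Walsh basis), and uses them as weights in a weighted majority vote. The simulated ensemble, seed, and agreement rate are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only, not the authors' method: first-order Walsh
# coefficients of an ensemble combining function, estimated as the
# correlation between each base classifier's vote and the class label.
rng = np.random.default_rng(0)

n_classifiers, n_samples = 7, 1000
labels = rng.integers(0, 2, n_samples) * 2 - 1        # true class in {-1, +1}

# Simulated base classifiers: each agrees with the label ~80% of the time.
agree = rng.random((n_samples, n_classifiers)) < 0.8
votes = np.where(agree, labels[:, None], -labels[:, None])

# First-order Walsh coefficient for classifier i: E[vote_i * label].
# Larger magnitude means stronger correlation with the class labels.
w = (votes * labels[:, None]).mean(axis=0)

# Ensemble decision: sign of the Walsh-weighted vote sum, i.e. a
# weighted majority vote whose weights are the estimated coefficients.
decision = np.sign(votes @ w)
accuracy = (decision == labels).mean()
print(np.round(w, 2), round(accuracy, 3))
```

For detection, the paper's hypothesis suggests comparing how well such a coefficient-based approximation fits the ensemble's outputs on clean versus perturbed inputs; the sketch above only covers the combining step.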
Author details: Terry Windeatt (ORCID: 0000-0002-5058-9701), University of Surrey, CVSSP, Guildford, Surrey, GU2 7XH, UK. Email: t.windeatt@surrey.ac.uk
DOI: 10.1016/j.neucom.2025.129364
Copyright: 2025
Discipline: Computer Science
GroupedDBID ---
--K
--M
.DC
.~1
0R~
123
1B1
1~.
1~5
4.4
457
4G.
53G
5VS
7-5
71M
8P~
9JM
9JN
AABNK
AACTN
AAEDT
AAEDW
AAIKJ
AAKOC
AALRI
AAOAW
AAQFI
AAXKI
AAXLA
AAXUO
AAYFN
ABBOA
ABCQJ
ABFNM
ABJNI
ABMAC
ACDAQ
ACGFS
ACRLP
ACZNC
ADBBV
ADEZE
AEBSH
AEIPS
AEKER
AENEX
AFJKZ
AFKWA
AFTJW
AFXIZ
AGHFR
AGUBO
AGWIK
AGYEJ
AHHHB
AHZHX
AIALX
AIEXJ
AIKHN
AITUG
AJOXV
AKRWK
ALMA_UNASSIGNED_HOLDINGS
AMFUW
AMRAJ
ANKPU
AOUOD
AXJTR
BKOJK
BLXMC
CS3
DU5
EBS
EFJIC
EO8
EO9
EP2
EP3
F5P
FDB
FIRID
FNPLU
FYGXN
G-Q
GBLVA
GBOLZ
IHE
J1W
KOM
M41
MO0
MOBAO
N9A
O-L
O9-
OAUVE
OZT
P-8
P-9
P2P
PC.
Q38
ROL
RPZ
SDF
SDG
SDP
SES
SEW
SPC
SPCBC
SSN
SSV
SSZ
T5K
ZMT
~G-
29N
9DU
AAQXK
AATTM
AAYWO
AAYXX
ABWVN
ABXDB
ACLOT
ACNNM
ACRPL
ACVFH
ADCNI
ADJOM
ADMUD
ADNMO
AEUPX
AFPUW
AGQPQ
AIGII
AIIUN
AKBMS
AKYEP
APXCP
ASPBG
AVWKF
AZFZN
CITATION
EFKBS
EFLBG
EJD
FEDTE
FGOYB
HLZ
HVGLF
HZ~
LG9
R2-
SBC
WUQ
XPP
~HD
ID FETCH-LOGICAL-c255t-8045d5d401a49a654926cf54da595a40751b77d4cf32875c61d21336e9fac2f33
ISICitedReferencesCount 0
ISICitedReferencesURI http://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=Summon&SrcAuth=ProQuest&DestLinkType=CitingArticles&DestApp=WOS_CPL&KeyUT=001417399000001&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D
ISSN 0925-2312
IngestDate Sat Nov 29 06:34:05 EST 2025
Sat Feb 15 15:52:13 EST 2025
Peer reviewed: Yes
Keywords: Boolean functions; Adversarial robustness; Deep neural networks; Security; Ensemble; Machine learning
References Wood, Mu, Webb, Reeve, Lujan, Brown (b19) 2023; 24
Szegedy (b29) 2014
Schmidt, Santurkar, Tsipras, Talwar, Madry (b37) 2018; 31
Lin (b31) 2021; 33
Szűcs, Kiss (b24) 2023; 82
Ortiz-Jiménez, Modas, Moosavi-Dezfooli, Frossard (b5) 2021; 109
Biggio, Corona, Maiorca, Nelson, Šrndić, Laskov, ., Roli (b30) 2013
Fawzi, Moosavi-Dezfooli, Frossard (b38) 2016; 29
Qamar, Zardari (b6) 2023
Han, Lin, Shen, Wang, Guan (b2) 2023; 55
Sen, Ravindran, Raghunathan (b53) 2020
Zhang, Benz, Lin, Karjauv, Wu, Kweon (b34) 2021
Nguyen, Fernando, Fookes, Sridharan (b4) 2023
Windeatt (b20) 2006; 17
Goodfellow, Shlens, Szegedy (b32) 2015
Ilyas, Santurkar, Tsipras, Engstrom, Tran, Madry (b35) 2019; 32
Nicolae, Sinn, Tran, Buesser, Rawat, Wistuba, ., Edwards (b43) 2023
A. Shafahi, W.R. Huang, C. Studer, S. Feizi, T. Goldstein, Are adversarial examples inevitable?
Windeatt, Zor (b42) 2013; 24
.
Zhang, Xie, Li, Mei, Liu (b36) 2023; 111
LeCun (b55) 2023
Hurst, Miller, Muzio (b27) 1985
Song, Wu, Song, Stojanovic (b8) 2023; 55
Addesso, Barni, Di Mauro, Matta (b15) 2021; 16
Song, Sun, Song, Stojanovic (b10) 2023; 111
Craighero, Angaroni, Stella, Damiani, Antoniotti, Graudenzi (b1) 2023; 554
Tao, Shi, Qiu, Jin, Stojanovic (b11) 2023; 35
Kurakin, Goodfellow, Bengio (b48) 2018
Verma, Swami (b54) 2019; 32
Tikhonov, Arsenin (b26) 1977
Kwon, Kim, Yoon, Choi (b51) 2021; 80
S.M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, P. Frossard, Universal adversarial perturbations, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1765–1773.
W. He, J. Wei, X. Chen, N. Carlini, D. Song, Adversarial example defense: Ensembles of weak defenses are not strong, in: 11th USENIX Workshop on Offensive Technologies, WOOT 17, 2017.
Tramer, Carlini, Brendel, Madry (b17) 2020; 33
Aldahdooh, Hamidouche, Fezza, Déforges (b50) 2022; 55
Windeatt, Zor, Camgoz (b28) 2018; 30
Krizhevsky, Sutskever, Hinton (b56) 2012; 25
Madry (b46) 2019
Serban, Poll, Visser (b49) 2020; 53
Pang, Xu, Du, Chen, Zhu (b52) 2019
He, Kim, Asghar (b3) 2023; 25
Mijwel, Esen, Shamil (b7) 2023
Zhang, Zhu, Hussain, Ye, Zhou (b14) 2023; 18
S.M. Moosavi-Dezfooli, A. Fawzi, P. Frossard, Deepfool: a simple and accurate method to fool deep neural networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2574–2582.
Lian, Jia, Wu, Huang (b16) 2023; 10
Tou, Gonzales (b25) 1974
Crecchi, Melis, Sotgiu, Bacciu, Biggio (b39) 2022; 470
Deng, Mu (b22) 2024; 36
Windeatt (b18) 2008
Addesso, Cirillo, Di Mauro, Matta (b13) 2020; 15
Guo, Zhao, Li, Kuang, Zhang, Han, Tan (b23) 2019; 501
Chivukula, Yang, Liu, Zhu, Zhou (b12) 2020; 33
Song, Wu, Song, Zhang, Stojanovic (b9) 2023; 550
Tumer, Ghosh (b40) 1996; 8
Windeatt, Zor (b41) 2011; 22
Krizhevsky (b44) 2023
Biggio (10.1016/j.neucom.2025.129364_b30) 2013
Nicolae (10.1016/j.neucom.2025.129364_b43) 2023
Tramer (10.1016/j.neucom.2025.129364_b17) 2020; 33
Windeatt (10.1016/j.neucom.2025.129364_b41) 2011; 22
Fawzi (10.1016/j.neucom.2025.129364_b38) 2016; 29
Windeatt (10.1016/j.neucom.2025.129364_b20) 2006; 17
Chivukula (10.1016/j.neucom.2025.129364_b12) 2020; 33
Addesso (10.1016/j.neucom.2025.129364_b15) 2021; 16
Zhang (10.1016/j.neucom.2025.129364_b36) 2023; 111
10.1016/j.neucom.2025.129364_b21
Madry (10.1016/j.neucom.2025.129364_b46) 2019
Mijwel (10.1016/j.neucom.2025.129364_b7) 2023
10.1016/j.neucom.2025.129364_b47
Tao (10.1016/j.neucom.2025.129364_b11) 2023; 35
Lian (10.1016/j.neucom.2025.129364_b16) 2023; 10
Schmidt (10.1016/j.neucom.2025.129364_b37) 2018; 31
He (10.1016/j.neucom.2025.129364_b3) 2023; 25
Krizhevsky (10.1016/j.neucom.2025.129364_b56) 2012; 25
Han (10.1016/j.neucom.2025.129364_b2) 2023; 55
Tikhonov (10.1016/j.neucom.2025.129364_b26) 1977
Kwon (10.1016/j.neucom.2025.129364_b51) 2021; 80
Zhang (10.1016/j.neucom.2025.129364_b34) 2021
Sen (10.1016/j.neucom.2025.129364_b53) 2020
Crecchi (10.1016/j.neucom.2025.129364_b39) 2022; 470
Windeatt (10.1016/j.neucom.2025.129364_b42) 2013; 24
Guo (10.1016/j.neucom.2025.129364_b23) 2019; 501
Craighero (10.1016/j.neucom.2025.129364_b1) 2023; 554
Tumer (10.1016/j.neucom.2025.129364_b40) 1996; 8
Kurakin (10.1016/j.neucom.2025.129364_b48) 2018
Aldahdooh (10.1016/j.neucom.2025.129364_b50) 2022; 55
LeCun (10.1016/j.neucom.2025.129364_b55) 2023
Ortiz-Jiménez (10.1016/j.neucom.2025.129364_b5) 2021; 109
Wood (10.1016/j.neucom.2025.129364_b19) 2023; 24
Goodfellow (10.1016/j.neucom.2025.129364_b32) 2015
Zhang (10.1016/j.neucom.2025.129364_b14) 2023; 18
Tou (10.1016/j.neucom.2025.129364_b25) 1974
Song (10.1016/j.neucom.2025.129364_b8) 2023; 55
10.1016/j.neucom.2025.129364_b45
Lin (10.1016/j.neucom.2025.129364_b31) 2021; 33
Song (10.1016/j.neucom.2025.129364_b9) 2023; 550
Windeatt (10.1016/j.neucom.2025.129364_b18) 2008
Qamar (10.1016/j.neucom.2025.129364_b6) 2023
Szűcs (10.1016/j.neucom.2025.129364_b24) 2023; 82
Ilyas (10.1016/j.neucom.2025.129364_b35) 2019; 32
Pang (10.1016/j.neucom.2025.129364_b52) 2019
Windeatt (10.1016/j.neucom.2025.129364_b28) 2018; 30
Addesso (10.1016/j.neucom.2025.129364_b13) 2020; 15
Hurst (10.1016/j.neucom.2025.129364_b27) 1985
Serban (10.1016/j.neucom.2025.129364_b49) 2020; 53
Szegedy (10.1016/j.neucom.2025.129364_b29) 2014
10.1016/j.neucom.2025.129364_b33
Song (10.1016/j.neucom.2025.129364_b10) 2023; 111
Nguyen (10.1016/j.neucom.2025.129364_b4) 2023
Deng (10.1016/j.neucom.2025.129364_b22) 2024; 36
Krizhevsky (10.1016/j.neucom.2025.129364_b44) 2023
Verma (10.1016/j.neucom.2025.129364_b54) 2019; 32
References_xml – start-page: 42
  year: 2023
  end-page: 45
  ident: b7
  article-title: Overview of neural networks
  publication-title: Babylon. J. Mach. Learn.
– volume: 32
  year: 2019
  ident: b35
  article-title: Adversarial examples are not bugs, they are features
  publication-title: Adv. Neural Inf. Process. Syst.
– volume: 111
  start-page: 12181
  year: 2023
  end-page: 12196
  ident: b10
  article-title: Finite-time adaptive neural resilient DSC for fractional-order nonlinear large-scale systems against sensor-actuator faults
  publication-title: Nonlinear Dyn.
– volume: 36
  year: 2024
  ident: b22
  article-title: Understanding and improving ensemble adversarial defense
  publication-title: Adv. Neural Inf. Process. Syst.
– year: 2015
  ident: b32
  article-title: Explaining and harnessing adversarial examples
– volume: 31
  year: 2018
  ident: b37
  article-title: Adversarially robust generalization requires more data
  publication-title: Adv. Neural Inf. Process. Syst.
– volume: 55
  start-page: 1
  year: 2023
  end-page: 38
  ident: b2
  article-title: Interpreting adversarial examples in deep learning: A review
  publication-title: ACM Comput. Surv.
– volume: 32
  year: 2019
  ident: b54
  article-title: Error correcting output codes improve probability estimation and adversarial robustness of deep neural networks
  publication-title: Adv. Neural Inf. Process. Syst.
– volume: 470
  start-page: 257
  year: 2022
  end-page: 268
  ident: b39
  article-title: Fader: Fast adversarial example rejection
  publication-title: Neurocomputing
– year: 2023
  ident: b4
  article-title: Physical adversarial attacks for surveillance: A survey
  publication-title: IEEE Trans. Neural Netw. Learn. Syst.
– volume: 30
  start-page: 1272
  year: 2018
  end-page: 1277
  ident: b28
  article-title: Approximation of ensemble boundary using spectral coefficients
  publication-title: IEEE Trans. Neural Netw. Learn. Syst.
– year: 2020
  ident: b53
  article-title: Ensembles of mixed precision deep networks for increased robustness against adversarial attacks
– start-page: 133
  year: 2008
  end-page: 147
  ident: b18
  article-title: Ensemble MLP classifier design
  publication-title: Computational Intelligence Paradigms: Innovative Applications
– reference: A. Shafahi, W.R. Huang, C. Studer, S. Feizi, T. Goldstein, Are adversarial examples inevitable?,
– volume: 109
  start-page: 635
  year: 2021
  end-page: 659
  ident: b5
  article-title: Optimism in the face of adversity: Understanding and improving deep learning through adversarial robustness
  publication-title: Proc. IEEE
– volume: 10
  start-page: 2086
  year: 2023
  end-page: 2097
  ident: b16
  article-title: A Stackelberg game approach to the stability of networked switched systems under DoS attacks
  publication-title: IEEE Trans. Netw. Sci. Eng.
– year: 2023
  ident: b44
  article-title: CIFAR-10 dataset
– year: 1977
  ident: b26
  article-title: Solutions of Ill-Posed Problems
– volume: 55
  start-page: 8997
  year: 2023
  end-page: 9018
  ident: b8
  article-title: Switching-like event-triggered state estimation for reaction–diffusion neural networks against DoS attacks
  publication-title: Neural Process. Lett.
– start-page: 124
  year: 2023
  end-page: 133
  ident: b6
  article-title: Artificial neural networks: An overview
  publication-title: Mesop. J. Comput. Sci.
– volume: 29
  year: 2016
  ident: b38
  article-title: Robustness of classifiers: from adversarial to random noise
  publication-title: Adv. Neural Inf. Process. Syst.
– volume: 22
  start-page: 1334
  year: 2011
  end-page: 1339
  ident: b41
  article-title: Minimising added classification error using Walsh coefficients
  publication-title: IEEE Trans. Neural Netw.
– year: 2019
  ident: b46
  article-title: Towards deep learning models resistant to adversarial attacks
  publication-title: Towards Deep Learning Models Resistant to Adversarial Attacks
– volume: 33
  start-page: 1633
  year: 2020
  end-page: 1645
  ident: b17
  article-title: On adaptive attacks to adversarial example defenses
  publication-title: Adv. Neural Inf. Process. Syst.
– start-page: 4970
  year: 2019
  end-page: 4979
  ident: b52
  article-title: Improving adversarial robustness via promoting ensemble diversity
  publication-title: International Conference on Machine Learning
– year: 2023
  ident: b55
  article-title: MNIST dataset
– volume: 18
  start-page: 1349
  year: 2023
  end-page: 1364
  ident: b14
  article-title: A game-theoretic method for defending against advanced persistent threats in cyber systems
  publication-title: IEEE Trans. Inf. Forensics Secur.
– volume: 16
  start-page: 3604
  year: 2021
  end-page: 3619
  ident: b15
  article-title: Adversarial Kendall’s model towards containment of distributed cyber-threats
  publication-title: IEEE Trans. Inf. Forensics Secur.
– reference: S.M. Moosavi-Dezfooli, A. Fawzi, P. Frossard, Deepfool: a simple and accurate method to fool deep neural networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2574–2582.
– volume: 17
  start-page: 1194
  year: 2006
  end-page: 1211
  ident: b20
  article-title: Accuracy/diversity and ensemble MLP classifier design
  publication-title: IEEE Trans. Neural Netw.
– volume: 550
  year: 2023
  ident: b9
  article-title: Bipartite synchronization for cooperative-competitive neural networks with reaction–diffusion terms via dual event-triggered mechanism
  publication-title: Neurocomputing
– year: 2023
  ident: b43
  article-title: Adversarial robustness toolbox
– start-page: 151
  year: 1974
  end-page: 154
  ident: b25
  article-title: Pattern Recognition Principles
– volume: 111
  start-page: 185
  year: 2023
  end-page: 215
  ident: b36
  article-title: A survey on learning to reject
  publication-title: Proc. IEEE
– reference: S.M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, P. Frossard, Universal adversarial perturbations, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1765–1773.
– year: 2014
  ident: b29
  article-title: Intriguing properties of neural networks
– volume: 33
  start-page: 3568
  year: 2020
  end-page: 3581
  ident: b12
  article-title: Game theoretical adversarial deep learning with variational adversaries
  publication-title: IEEE Trans. Knowl. Data Eng.
– volume: 8
  start-page: 385
  year: 1996
  end-page: 404
  ident: b40
  article-title: Error correlation and error reduction in ensemble classifiers
  publication-title: Connect. Sci.
– volume: 33
  start-page: 7888
  year: 2021
  end-page: 7898
  ident: b31
  article-title: Supervised learning in neural networks: Feedback-network-free implementation and biological plausibility
  publication-title: IEEE Trans. Neural Netw. Learn. Syst.
– year: 2021
  ident: b34
  article-title: A survey on universal adversarial attack
– volume: 25
  year: 2012
  ident: b56
  article-title: Imagenet classification with deep convolutional neural networks
  publication-title: Adv. Neural Inf. Process. Syst.
– volume: 55
  start-page: 4403
  year: 2022
  end-page: 4462
  ident: b50
  article-title: Adversarial example detection for DNN models: A review and experimental comparison
  publication-title: Artif. Intell. Rev.
– year: 1985
  ident: b27
  article-title: Spectral Techniques in Digital Logic
– volume: 501
  start-page: 182
  year: 2019
  end-page: 192
  ident: b23
  article-title: Detecting adversarial examples via prediction difference for deep neural networks
  publication-title: Inform. Sci.
– volume: 24
  start-page: 673
  year: 2013
  end-page: 678
  ident: b42
  article-title: Ensemble pruning using spectral coefficients
  publication-title: IEEE Trans. Neural Netw. Learn. Syst.
– volume: 24
  start-page: 1
  year: 2023
  end-page: 49
  ident: b19
  article-title: A unified theory of diversity in ensemble learning
  publication-title: J. Mach. Learn. Res.
– start-page: 387
  year: 2013
  end-page: 402
  ident: b30
  article-title: Evasion attacks against machine learning at test time
  publication-title: Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2013, Prague, Czech Republic, September 23-27, 2013, Proceedings, Part III 13
– volume: 35
  year: 2023
  ident: b11
  article-title: Planetary gearbox fault diagnosis based on FDKNN-DGAT with few labeled data
  publication-title: Meas. Sci. Technol.
– volume: 80
  start-page: 10339
  year: 2021
  end-page: 10360
  ident: b51
  article-title: Classification score approach for detecting adversarial example in deep neural network
  publication-title: Multimedia Tools Appl.
– reference: .
– reference: W. He, J. Wei, X. Chen, N. Carlini, D. Song, Adversarial example defense: Ensembles of weak defenses are not strong, in: 11th USENIX Workshop on Offensive Technologies, WOOT 17, 2017.
– volume: 82
  start-page: 16717
  year: 2023
  end-page: 16740
  ident: b24
  article-title: 2N labeling defense method against adversarial attacks by filtering and extended class label set
  publication-title: Multimedia Tools Appl.
– volume: 25
  start-page: 538
  year: 2023
  end-page: 566
  ident: b3
  article-title: Adversarial machine learning for network intrusion detection systems: A comprehensive survey
  publication-title: IEEE Commun. Surv. Tutor.
– volume: 554
  year: 2023
  ident: b1
  article-title: Unity is strength: Improving the detection of adversarial examples with ensemble approaches
  publication-title: Neurocomputing
– volume: 53
  start-page: 1
  year: 2020
  end-page: 38
  ident: b49
  article-title: Adversarial examples on object recognition: A comprehensive survey
  publication-title: ACM Comput. Surv.
– volume: 15
  start-page: 943
  year: 2020
  end-page: 958
  ident: b13
  article-title: ADVoIP: Adversarial detection of encrypted and concealed VoIP
  publication-title: IEEE Trans. Inf. Forensics Secur.
– start-page: 99
  year: 2018
  end-page: 112
  ident: b48
  article-title: Adversarial examples in the physical world
  publication-title: Artificial Intelligence Safety and Security
– volume: 470
  start-page: 257
  year: 2022
  ident: 10.1016/j.neucom.2025.129364_b39
  article-title: Fader: Fast adversarial example rejection
  publication-title: Neurocomputing
  doi: 10.1016/j.neucom.2021.10.082
– year: 2015
  ident: 10.1016/j.neucom.2025.129364_b32
– volume: 31
  year: 2018
  ident: 10.1016/j.neucom.2025.129364_b37
  article-title: Adversarially robust generalization requires more data
  publication-title: Adv. Neural Inf. Process. Syst.
– year: 2023
  ident: 10.1016/j.neucom.2025.129364_b44
– volume: 111
  start-page: 12181
  issue: 13
  year: 2023
  ident: 10.1016/j.neucom.2025.129364_b10
  article-title: Finite-time adaptive neural resilient DSC for fractional-order nonlinear large-scale systems against sensor-actuator faults
  publication-title: Nonlinear Dyn.
  doi: 10.1007/s11071-023-08456-0
– start-page: 124
  issue: 2023
  year: 2023
  ident: 10.1016/j.neucom.2025.129364_b6
  article-title: Artificial neural networks: An overview
  publication-title: Mesop. J. Comput. Sci.
– volume: 36
  year: 2024
  ident: 10.1016/j.neucom.2025.129364_b22
  article-title: Understanding and improving ensemble adversarial defense
  publication-title: Adv. Neural Inf. Process. Syst.
– volume: 30
  start-page: 1272
  issue: 4
  year: 2018
  ident: 10.1016/j.neucom.2025.129364_b28
  article-title: Approximation of ensemble boundary using spectral coefficients
  publication-title: IEEE Trans. Neural Netw. Learn. Syst.
  doi: 10.1109/TNNLS.2018.2861579
– volume: 111
  start-page: 185
  issue: 2
  year: 2023
  ident: 10.1016/j.neucom.2025.129364_b36
  article-title: A survey on learning to reject
  publication-title: Proc. IEEE
  doi: 10.1109/JPROC.2023.3238024
– volume: 24
  start-page: 673
  issue: 4
  year: 2013
  ident: 10.1016/j.neucom.2025.129364_b42
  article-title: Ensemble pruning using spectral coefficients
  publication-title: IEEE Trans. Neural Netw. Learn. Syst.
  doi: 10.1109/TNNLS.2013.2239659
– volume: 109
  start-page: 635
  issue: 5
  year: 2021
  ident: 10.1016/j.neucom.2025.129364_b5
  article-title: Optimism in the face of adversity: Understanding and improving deep learning through adversarial robustness
  publication-title: Proc. IEEE
  doi: 10.1109/JPROC.2021.3050042
– volume: 35
  issue: 2
  year: 2023
  ident: 10.1016/j.neucom.2025.129364_b11
  article-title: Planetary gearbox fault diagnosis based on FDKNN-DGAT with few labeled data
  publication-title: Meas. Sci. Technol.
  doi: 10.1088/1361-6501/ad0f6d
– volume: 82
  start-page: 16717
  issue: 11
  year: 2023
  ident: 10.1016/j.neucom.2025.129364_b24
  article-title: 2N labeling defense method against adversarial attacks by filtering and extended class label set
  publication-title: Multimedia Tools Appl.
  doi: 10.1007/s11042-022-14021-5
– year: 1985
  ident: 10.1016/j.neucom.2025.129364_b27
– volume: 33
  start-page: 7888
  issue: 12
  year: 2021
  ident: 10.1016/j.neucom.2025.129364_b31
  article-title: Supervised learning in neural networks: Feedback-network-free implementation and biological plausibility
  publication-title: IEEE Trans. Neural Netw. Learn. Syst.
  doi: 10.1109/TNNLS.2021.3089134
– year: 2020
  ident: 10.1016/j.neucom.2025.129364_b53
– volume: 33
  start-page: 1633
  year: 2020
  ident: 10.1016/j.neucom.2025.129364_b17
  article-title: On adaptive attacks to adversarial example defenses
  publication-title: Adv. Neural Inf. Process. Syst.
– start-page: 42
  issue: 2023
  year: 2023
  ident: 10.1016/j.neucom.2025.129364_b7
  article-title: Overview of neural networks
  publication-title: Babylon. J. Mach. Learn.
  doi: 10.58496/BJML/2023/008
– volume: 24
  start-page: 1
  issue: 359
  year: 2023
  ident: 10.1016/j.neucom.2025.129364_b19
  article-title: A unified theory of diversity in ensemble learning
  publication-title: J. Mach. Learn. Res.
– volume: 32
  year: 2019
  ident: 10.1016/j.neucom.2025.129364_b35
  article-title: Adversarial examples are not bugs, they are features
  publication-title: Adv. Neural Inf. Process. Syst.
– year: 2014
  ident: 10.1016/j.neucom.2025.129364_b29
– volume: 25
  start-page: 538
  issue: 1
  year: 2023
  ident: 10.1016/j.neucom.2025.129364_b3
  article-title: Adversarial machine learning for network intrusion detection systems: A comprehensive survey
  publication-title: IEEE Commun. Surv. Tutor.
  doi: 10.1109/COMST.2022.3233793
– volume: 80
  start-page: 10339
  year: 2021
  ident: 10.1016/j.neucom.2025.129364_b51
  article-title: Classification score approach for detecting adversarial example in deep neural network
  publication-title: Multimedia Tools Appl.
  doi: 10.1007/s11042-020-09167-z
– volume: 22
  start-page: 1334
  issue: 8
  year: 2011
  ident: 10.1016/j.neucom.2025.129364_b41
  article-title: Minimising added classification error using Walsh coefficients
  publication-title: IEEE Trans. Neural Netw.
  doi: 10.1109/TNN.2011.2159513
– start-page: 133
  year: 2008
  ident: 10.1016/j.neucom.2025.129364_b18
  article-title: Ensemble MLP classifier design
– volume: 55
  start-page: 4403
  issue: 6
  year: 2022
  ident: 10.1016/j.neucom.2025.129364_b50
  article-title: Adversarial example detection for DNN models: A review and experimental comparison
  publication-title: Artif. Intell. Rev.
  doi: 10.1007/s10462-021-10125-w
– start-page: 387
  year: 2013
  ident: 10.1016/j.neucom.2025.129364_b30
  article-title: Evasion attacks against machine learning at test time
– year: 1977
  ident: 10.1016/j.neucom.2025.129364_b26
– ident: 10.1016/j.neucom.2025.129364_b33
– ident: 10.1016/j.neucom.2025.129364_b47
  doi: 10.1109/CVPR.2017.17
– volume: 10
  start-page: 2086
  year: 2023
  ident: 10.1016/j.neucom.2025.129364_b16
  article-title: A Stackelberg game approach to the stability of networked switched systems under DoS attacks
  publication-title: IEEE Trans. Netw. Sci. Eng.
  doi: 10.1109/TNSE.2023.3240687
– volume: 15
  start-page: 943
  year: 2020
  ident: 10.1016/j.neucom.2025.129364_b13
  article-title: ADVoIP: Adversarial detection of encrypted and concealed VoIP
  publication-title: IEEE Trans. Inf. Forensics Secur.
  doi: 10.1109/TIFS.2019.2922398
– volume: 55
  start-page: 1
  issue: 14s
  year: 2023
  ident: 10.1016/j.neucom.2025.129364_b2
  article-title: Interpreting adversarial examples in deep learning: A review
  publication-title: ACM Comput. Surv.
  doi: 10.1145/3594869
– volume: 53
  start-page: 1
  issue: 3
  year: 2020
  ident: 10.1016/j.neucom.2025.129364_b49
  article-title: Adversarial examples on object recognition: A comprehensive survey
  publication-title: ACM Comput. Surv.
  doi: 10.1145/3398394
– volume: 32
  year: 2019
  ident: 10.1016/j.neucom.2025.129364_b54
  article-title: Error correcting output codes improve probability estimation and adversarial robustness of deep neural networks
  publication-title: Adv. Neural Inf. Process. Syst.
– volume: 501
  start-page: 182
  year: 2019
  ident: 10.1016/j.neucom.2025.129364_b23
  article-title: Detecting adversarial examples via prediction difference for deep neural networks
  publication-title: Inform. Sci.
  doi: 10.1016/j.ins.2019.05.084
– volume: 29
  year: 2016
  ident: 10.1016/j.neucom.2025.129364_b38
  article-title: Robustness of classifiers: from adversarial to random noise
  publication-title: Adv. Neural Inf. Process. Syst.
– volume: 550
  year: 2023
  ident: 10.1016/j.neucom.2025.129364_b9
  article-title: Bipartite synchronization for cooperative-competitive neural networks with reaction–diffusion terms via dual event-triggered mechanism
  publication-title: Neurocomputing
  doi: 10.1016/j.neucom.2023.126498
– year: 2023
  ident: 10.1016/j.neucom.2025.129364_b4
  article-title: Physical adversarial attacks for surveillance: A survey
  publication-title: IEEE Trans. Neural Netw. Learn. Syst.
– volume: 55
  start-page: 8997
  issue: 7
  year: 2023
  ident: 10.1016/j.neucom.2025.129364_b8
  article-title: Switching-like event-triggered state estimation for reaction–diffusion neural networks against DoS attacks
  publication-title: Neural Process. Lett.
  doi: 10.1007/s11063-023-11189-1
– year: 2021
  ident: 10.1016/j.neucom.2025.129364_b34
– year: 2019
  ident: 10.1016/j.neucom.2025.129364_b46
  article-title: Towards deep learning models resistant to adversarial attacks
– volume: 17
  start-page: 1194
  issue: 5
  year: 2006
  ident: 10.1016/j.neucom.2025.129364_b20
  article-title: Accuracy/diversity and ensemble MLP classifier design
  publication-title: IEEE Trans. Neural Netw.
  doi: 10.1109/TNN.2006.875979
– volume: 33
  start-page: 3568
  issue: 11
  year: 2020
  ident: 10.1016/j.neucom.2025.129364_b12
  article-title: Game theoretical adversarial deep learning with variational adversaries
  publication-title: IEEE Trans. Knowl. Data Eng.
  doi: 10.1109/TKDE.2020.2972320
– ident: 10.1016/j.neucom.2025.129364_b45
  doi: 10.1109/CVPR.2016.282
– ident: 10.1016/j.neucom.2025.129364_b21
– start-page: 151
  year: 1974
  ident: 10.1016/j.neucom.2025.129364_b25
– volume: 554
  year: 2023
  ident: 10.1016/j.neucom.2025.129364_b1
  article-title: Unity is strength: Improving the detection of adversarial examples with ensemble approaches
  publication-title: Neurocomputing
  doi: 10.1016/j.neucom.2023.126576
– volume: 25
  year: 2012
  ident: 10.1016/j.neucom.2025.129364_b56
  article-title: ImageNet classification with deep convolutional neural networks
  publication-title: Adv. Neural Inf. Process. Syst.
– start-page: 4970
  year: 2019
  ident: 10.1016/j.neucom.2025.129364_b52
  article-title: Improving adversarial robustness via promoting ensemble diversity
– year: 2023
  ident: 10.1016/j.neucom.2025.129364_b43
– start-page: 99
  year: 2018
  ident: 10.1016/j.neucom.2025.129364_b48
  article-title: Adversarial examples in the physical world
– volume: 8
  start-page: 385
  issue: 3–4
  year: 1996
  ident: 10.1016/j.neucom.2025.129364_b40
  article-title: Error correlation and error reduction in ensemble classifiers
  publication-title: Connect. Sci.
  doi: 10.1080/095400996116839
– year: 2023
  ident: 10.1016/j.neucom.2025.129364_b55
– volume: 18
  start-page: 1349
  year: 2023
  ident: 10.1016/j.neucom.2025.129364_b14
  article-title: A game-theoretic method for defending against advanced persistent threats in cyber systems
  publication-title: IEEE Trans. Inf. Forensics Secur.
  doi: 10.1109/TIFS.2022.3229595
– volume: 16
  start-page: 3604
  year: 2021
  ident: 10.1016/j.neucom.2025.129364_b15
  article-title: Adversarial Kendall’s model towards containment of distributed cyber-threats
  publication-title: IEEE Trans. Inf. Forensics Secur.
  doi: 10.1109/TIFS.2021.3082327
StartPage 129364
SubjectTerms Adversarial robustness
Boolean functions
Deep neural networks
Ensemble
Machine learning
Security
Title Adversarial detection by approximation of ensemble boundary
URI https://dx.doi.org/10.1016/j.neucom.2025.129364
Volume 622