Trembling triggers: exploring the sensitivity of backdoors in DNN-based face recognition

Detailed bibliography
Published in: EURASIP Journal on Information Security, Volume 2020, Issue 1, pp. 1-15
Main authors: Pasquini, Cecilia; Böhme, Rainer
Format: Journal Article
Language: English
Publication details: Cham: Springer International Publishing, 23 June 2020
Springer Nature B.V
SpringerOpen
ISSN: 2510-523X, 1687-4161; EISSN: 2510-523X, 1687-417X
Online access: Get full text
Abstract Backdoor attacks against supervised machine learning methods seek to modify the training samples in such a way that, at inference time, the presence of a specific pattern (trigger) in the input data causes misclassifications to a target class chosen by the adversary. Successful backdoor attacks have been presented in particular for face recognition systems based on deep neural networks (DNNs). These attacks were evaluated for identical triggers at training and inference time. However, the vulnerability to backdoor attacks in practice crucially depends on the sensitivity of the backdoored classifier to approximate trigger inputs. To assess this, we study the response of a backdoored DNN for face recognition to trigger signals that have been transformed with typical image processing operators of varying strength. Results for different kinds of geometric and color transformations suggest that in particular geometric misplacements and partial occlusions of the trigger limit the effectiveness of the backdoor attacks considered. Moreover, our analysis reveals that the spatial interaction of the trigger with the subject’s face affects the success of the attack. Experiments with physical triggers inserted in live acquisitions validate the observed response of the DNN when triggers are inserted digitally.
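The abstract describes probing a backdoored face-recognition DNN with triggers that only approximate the one used during poisoning (geometric misplacement, partial occlusion, color/intensity changes). The following Python lines are a minimal sketch of such a sensitivity sweep; this is not the authors' code, and model, faces, trigger, and target are hypothetical placeholders for a backdoored classifier, a set of clean test images, the trigger patch, and the adversary's target class.

# Hedged sketch (assumption-heavy, not the authors' implementation): measure how the
# attack success rate of a backdoored classifier changes when the inference-time
# trigger is misplaced, partially pushed out of the frame, or intensity-scaled.
import numpy as np

def paste_trigger(image, trigger, top, left, gain=1.0):
    """Return a copy of `image` (HxWx3 uint8) with the RGB `trigger` patch pasted
    at (top, left). Coordinates may fall partially outside the frame, which
    emulates partial occlusion; `gain` rescales the trigger's intensity to
    emulate a simple color/contrast transformation."""
    out = image.astype(np.float32).copy()
    h, w = trigger.shape[:2]
    t0, l0 = max(top, 0), max(left, 0)
    t1, l1 = min(top + h, image.shape[0]), min(left + w, image.shape[1])
    if t0 >= t1 or l0 >= l1:
        return image.copy()  # trigger lies entirely outside the image
    patch = trigger[t0 - top:t1 - top, l0 - left:l1 - left].astype(np.float32) * gain
    out[t0:t1, l0:l1] = np.clip(patch, 0.0, 255.0)
    return out.astype(np.uint8)

def attack_success_rate(model, faces, trigger, target, base=(10, 10), offset=(0, 0), gain=1.0):
    """Fraction of test images classified as the adversary's `target` class when
    the trigger is inserted at `base + offset` with intensity scaled by `gain`."""
    hits = 0
    for img in faces:
        stamped = paste_trigger(img, trigger, base[0] + offset[0], base[1] + offset[1], gain)
        hits += int(model(stamped) == target)
    return hits / len(faces)

# Example sweep over horizontal misplacement of the trigger (in pixels), analogous
# to applying geometric transformations of increasing strength:
# for dx in range(-16, 17, 4):
#     print(dx, attack_success_rate(model, faces, trigger, target, offset=(0, dx)))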
ArticleNumber 12
Author Pasquini, Cecilia
Böhme, Rainer
Author_xml – sequence: 1
  givenname: Cecilia
  surname: Pasquini
  fullname: Pasquini, Cecilia
  email: cecilia.pasquini@unitn.it
  organization: Department of Information Engineering and Computer Science, University of Trento
– sequence: 2
  givenname: Rainer
  surname: Böhme
  fullname: Böhme, Rainer
  organization: Department of Computer Science, University of Innsbruck
ContentType Journal Article
Copyright The Author(s) 2020
The Author(s) 2020. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
DOI 10.1186/s13635-020-00104-z
Discipline Engineering
Computer Science
EISSN 2510-523X
1687-417X
EndPage 15
ISICitedReferencesCount 11
ISSN 2510-523X
1687-4161
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 1
Keywords Neural networks
Adversarial machine learning
Backdoor attacks
Language English
OpenAccessLink https://doaj.org/article/1b96700d656947198105187208a333b8
PageCount 15
PublicationCentury 2000
PublicationDate 2020-06-23
PublicationDateYYYYMMDD 2020-06-23
PublicationDecade 2020
PublicationPlace Cham
PublicationPlace_xml – name: Cham
– name: New York
PublicationTitle EURASIP Journal on Information Security
PublicationTitleAbbrev EURASIP J. on Info. Security
PublicationYear 2020
Publisher Springer International Publishing
Springer Nature B.V
SpringerOpen
StartPage 1
SubjectTerms Adversarial machine learning
Artificial neural networks
Backdoor attacks
Communications Engineering
Dependable deep learning for security-oriented applications
Engineering
Face recognition
Facial recognition technology
Image processing
Inference
Machine learning
Networks
Neural networks
Object recognition
Security Science and Technology
Signal processing
Signal,Image and Speech Processing
Systems and Data Security
Training
Title Trembling triggers: exploring the sensitivity of backdoors in DNN-based face recognition
URI https://link.springer.com/article/10.1186/s13635-020-00104-z
https://www.proquest.com/docview/2416040597
https://doaj.org/article/1b96700d656947198105187208a333b8
Volume 2020