Computational Color Constancy-Based Backdoor Attacks

Detailed bibliography
Published in: 2023 International Symposium on Image and Signal Processing and Analysis (ISPA), pp. 1-6
Main authors: Vrsnak, Donik; Sabolic, Ivan; Subasic, Marko; Loncaric, Sven
Format: Conference paper
Language: English
Published: IEEE, 18 September 2023
Subjects: Additives; Computational modeling; Computer vision; Image color analysis; Lighting; Pipelines; Training
ISSN: 1849-2266
Online access: https://ieeexplore.ieee.org/document/10278694
Abstract Deep neural networks (DNNs) have become an integral part of many computer vision tasks. However, training complex neural networks requires a large amount of computational resources, so many users outsource training to third parties. This introduces an attack vector for backdoor attacks. These are attacks in which the neural network behaves as expected for benign inputs but acts maliciously when a backdoor trigger is present in the input. Triggers are small, preferably stealthy additions to the input. However, most of these triggers follow the additive model, i.e., the trigger is simply added onto the image. Furthermore, optimized triggers are artificial, which means that they are difficult or impossible to reproduce in the real world, making them impractical in real-world settings. In this work, we present a novel way of injecting triggers for the classification problem. It is based on the von Kries model for image color correction, a component used in virtually all image processing pipelines. First, our trigger uses a multiplicative rather than an additive model, which makes the injection harder to detect by defensive methods. Second, the trigger is based on the real-world phenomenon of changing illumination. Finally, it can be made harder for a human observer to spot than some additive triggers. We test the performance of our attack strategy against various defense methods on several frequently used datasets and achieve excellent results. Furthermore, we show that the malicious behavior of models trained on artificially colored images can be activated in real-world scenarios, further increasing the usefulness of our attack strategy.
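The abstract describes the trigger in terms of the von Kries (diagonal) model of color constancy, under which a change of illumination scales each color channel independently, (R, G, B) -> (k_R*R, k_G*G, k_B*B). The sketch below is only an illustrative reading of that idea, not the authors' implementation; the function names, gain values, and poisoning rate are assumptions chosen to show how such a multiplicative, illumination-like trigger could be injected into a training set.

import numpy as np

def apply_von_kries_trigger(image, gains=(1.25, 1.0, 0.8)):
    # Simulate an illuminant change by scaling the R, G and B channels
    # independently (diagonal von Kries model). `image` is assumed to be a
    # float array in [0, 1] with shape (H, W, 3); the gain values are
    # illustrative, not taken from the paper.
    shifted = image * np.asarray(gains, dtype=image.dtype)
    return np.clip(shifted, 0.0, 1.0)

def poison_dataset(images, labels, target_label, rate=0.1, seed=0):
    # Generic poisoning setup (assumed for illustration): apply the color
    # trigger to a random fraction of the training images and relabel them
    # with the attacker's target class. `images` and `labels` are assumed
    # to be NumPy arrays.
    rng = np.random.default_rng(seed)
    poisoned, new_labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        poisoned[i] = apply_von_kries_trigger(poisoned[i])
        new_labels[i] = target_label
    return poisoned, new_labels

Because the trigger multiplies pixel values instead of adding a fixed patch, it mimics a plausible illumination change, which is the property the abstract credits for both the real-world reproducibility of the attack and the difficulty defensive methods have in detecting the injection.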
Author Sabolic, Ivan
Loncaric, Sven
Subasic, Marko
Vrsnak, Donik
Author_xml – sequence: 1
  givenname: Donik
  surname: Vrsnak
  fullname: Vrsnak, Donik
  email: donik.vrsnak@fer.hr
  organization: University of Zagreb, Faculty of Electrical Engineering and Computing
– sequence: 2
  givenname: Ivan
  surname: Sabolic
  fullname: Sabolic, Ivan
  email: ivan.sabolic@fer.hr
  organization: University of Zagreb, Faculty of Electrical Engineering and Computing
– sequence: 3
  givenname: Marko
  surname: Subasic
  fullname: Subasic, Marko
  email: marko.subasic@fer.hr
  organization: University of Zagreb, Faculty of Electrical Engineering and Computing
– sequence: 4
  givenname: Sven
  surname: Loncaric
  fullname: Loncaric, Sven
  email: sven.loncaric@fer.hr
  organization: University of Zagreb, Faculty of Electrical Engineering and Computing
ContentType Conference Proceeding
DOI 10.1109/ISPA58351.2023.10278694
EISBN 9798350315363
EISSN 1849-2266
EndPage 6
ExternalDocumentID 10278694
Genre orig-research
Language English
PageCount 6
PublicationDate 2023-Sept.-18
PublicationTitle 2023 International Symposium on Image and Signal Processing and Analysis (ISPA)
PublicationTitleAbbrev ISPA
PublicationYear 2023
Publisher IEEE
StartPage 1
SubjectTerms Additives
Computational modeling
Computer vision
Image color analysis
Lighting
Pipelines
Training
Title Computational Color Constancy-Based Backdoor Attacks
URI https://ieeexplore.ieee.org/document/10278694