Computational Color Constancy-Based Backdoor Attacks

Bibliographic Details
Published in: 2023 International Symposium on Image and Signal Processing and Analysis (ISPA), pp. 1-6
Main Authors: Vrsnak, Donik; Sabolic, Ivan; Subasic, Marko; Loncaric, Sven
Format: Conference Proceeding
Language: English
Published: IEEE, 18 September 2023
ISSN: 1849-2266
Online Access: Full text

Abstract
Deep neural networks (DNNs) have become an integral part of many computer vision tasks. However, training complex neural networks requires a large amount of computational resources, so many users outsource training to third parties. This introduces an attack vector for backdoor attacks, in which the neural network behaves as expected on benign inputs but acts maliciously when a backdoor trigger is present in the input. Triggers are small, preferably stealthy modifications of the input. However, most existing triggers follow the additive model, i.e., the trigger is simply added onto the image. Furthermore, optimized triggers are artificial, which makes them difficult or impossible to reproduce in the real world and therefore impractical to use in real-world settings. In this work, we present a novel way of injecting triggers for the classification problem. It is based on the von Kries model for image color correction, a component frequently used in image processing pipelines. First, our trigger uses a multiplicative rather than an additive model, which makes the injection harder to detect with defensive methods. Second, the trigger is based on the real-world phenomenon of changing illumination. Finally, it can be made harder for a human observer to spot than some additive triggers. We test the performance of our attack strategy against various defense methods on several frequently used datasets and achieve excellent results. Furthermore, we show that the malicious behavior of models trained on artificially colored images can be activated in real-world scenarios, further increasing the practicality of our attack strategy.
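
The attack described in the abstract replaces the usual additive patch with a von Kries-style diagonal color transform: each color channel of the whole image is multiplied by its own gain, mimicking a change of scene illumination. The sketch below illustrates the difference between such a multiplicative trigger and a simple additive patch trigger. The function names, gain values, and the patch baseline are illustrative assumptions for a NumPy RGB image in [0, 1]; this is not the authors' implementation.

```python
import numpy as np

def apply_von_kries_trigger(image, gains=(1.25, 1.0, 0.8)):
    """Apply a multiplicative, von Kries-style color cast as a backdoor trigger.

    The von Kries model describes an illuminant change as a diagonal transform
    that scales each color channel independently:
        I'_c(x, y) = g_c * I_c(x, y)   for c in {R, G, B}
    `image` is assumed to be a float array of shape (H, W, 3) in [0, 1];
    the gains used here are hypothetical values chosen for illustration.
    """
    gains = np.asarray(gains, dtype=image.dtype).reshape(1, 1, 3)
    return np.clip(image * gains, 0.0, 1.0)

def apply_additive_patch_trigger(image, patch_value=1.0, size=4):
    """Baseline additive trigger for comparison: paste a small bright patch
    into the bottom-right corner (a BadNets-style patch trigger)."""
    poisoned = image.copy()
    poisoned[-size:, -size:, :] = patch_value
    return poisoned

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.random((32, 32, 3)).astype(np.float32)  # stand-in for a CIFAR-sized image
    poisoned = apply_von_kries_trigger(clean)
    # The residual is proportional to the pixel values themselves rather than
    # being a fixed localized pattern, unlike the additive patch baseline.
    print(np.abs(poisoned - clean).max())
```

Because the trigger acts as a global per-channel scaling, the difference between clean and poisoned images varies with image content instead of forming a fixed additive pattern, which is one reason the abstract argues such triggers are harder for defenses tuned to additive triggers to isolate.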

Authors
Vrsnak, Donik (donik.vrsnak@fer.hr), University of Zagreb, Faculty of Electrical Engineering and Computing
Sabolic, Ivan (ivan.sabolic@fer.hr), University of Zagreb, Faculty of Electrical Engineering and Computing
Subasic, Marko (marko.subasic@fer.hr), University of Zagreb, Faculty of Electrical Engineering and Computing
Loncaric, Sven (sven.loncaric@fer.hr), University of Zagreb, Faculty of Electrical Engineering and Computing

DOI 10.1109/ISPA58351.2023.10278694
EISBN 9798350315363
EISSN 1849-2266
SubjectTerms Additives; Computational modeling; Computer vision; Image color analysis; Lighting; Pipelines; Training
URI https://ieeexplore.ieee.org/document/10278694