A CycleGAN Watermarking Method for Ownership Verification

Detailed Bibliography
Title: A CycleGAN Watermarking Method for Ownership Verification
Authors: Dongdong Lin, Benedetta Tondi, Bin Li, Mauro Barni
Contributors: Lin, D., Tondi, B., Li, B., Barni, M.
Source: IEEE Transactions on Dependable and Secure Computing, vol. 22, pp. 1040-1054
Publisher Information: Institute of Electrical and Electronics Engineers (IEEE), 2025.
Publication Year: 2025
Subjects: CycleGAN, Data models, Decoding, DNN model watermarking, GAN watermarking, Generative adversarial networks, Generators, Intellectual property rights protection, Robustness, Surrogate model attack, Training, Watermarking
Description: Due to the widespread use and proliferation of Deep Neural Networks (DNNs), safeguarding their Intellectual Property Rights (IPR) has become increasingly important. This paper proposes a method for watermarking a cyclic Generative Adversarial Network (GAN), specifically CycleGAN, to close the gap between the watermarking of conventional GAN models and cyclic GAN watermarking. The proposed method first trains a watermark decoder, which is then frozen and used to extract the watermark bits during the training of the CycleGAN model. The model is trained with loss functions optimized to achieve strong performance on both the Image-to-Image Translation (I2IT) task and watermark embedding. In addition, a comprehensive theoretical and practical statistical analysis is given for verifying ownership of the model from the extracted watermark bits. Finally, the model's robustness against image post-processing is evaluated and further improved by fine-tuning the watermark decoder, applying data augmentation to the generated images before the watermark bits are extracted. The robustness of the watermark to surrogate model attacks, carried out by accessing the watermarked model in a black-box manner, is also verified. The experimental results demonstrate that the proposed method is effective, robust against image post-processing, and able to resist surrogate model attacks.
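The description above outlines an embedding scheme in which a pre-trained watermark decoder is frozen and the generator is trained to satisfy both the translation objective and the decoder's bit-extraction objective. The following PyTorch sketch illustrates that idea only; the module definitions, watermark length, and loss weight (TinyGenerator, WatermarkDecoder, BITS, lambda_wm) are illustrative assumptions, not the architecture or hyper-parameters used in the paper.

    # Minimal sketch (PyTorch) of adding a watermark-embedding term to a
    # generator's training loss, in the spirit of the method described above.
    # TinyGenerator, WatermarkDecoder, BITS, and lambda_wm are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    BITS = 32  # assumed watermark length

    class TinyGenerator(nn.Module):
        """Stand-in for a CycleGAN generator (image -> image)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
            )
        def forward(self, x):
            return self.net(x)

    class WatermarkDecoder(nn.Module):
        """Stand-in for the pre-trained decoder mapping an image to bit logits."""
        def __init__(self, bits=BITS):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Linear(16, bits)
        def forward(self, x):
            return self.head(self.features(x))

    generator = TinyGenerator()
    decoder = WatermarkDecoder()

    # The decoder is assumed to be pre-trained; freeze it so only the
    # generator learns to embed the owner's bit string into its outputs.
    decoder.eval()
    for p in decoder.parameters():
        p.requires_grad_(False)

    watermark = torch.randint(0, 2, (1, BITS)).float()  # owner's secret bits
    lambda_wm = 1.0                                      # assumed loss weight
    optimizer = torch.optim.Adam(generator.parameters(), lr=2e-4)

    def training_step(real_a, real_b):
        fake_b = generator(real_a)

        # Placeholder for the usual CycleGAN objectives (adversarial,
        # cycle-consistency, identity); a simple L1 term stands in for them.
        task_loss = F.l1_loss(fake_b, real_b)

        # Watermark term: the frozen decoder must recover the owner's bits
        # from the generated image.
        bit_logits = decoder(fake_b)
        wm_loss = F.binary_cross_entropy_with_logits(
            bit_logits, watermark.expand(bit_logits.size(0), -1))

        loss = task_loss + lambda_wm * wm_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Dummy batch to show the call shape.
    loss_value = training_step(torch.randn(2, 3, 64, 64),
                               torch.randn(2, 3, 64, 64))

At verification time, ownership is presumably decided by comparing the number of correctly extracted bits against a threshold; under the null hypothesis of an unrelated model, the match count can be modeled as binomial, which is the standard basis for such a test. The paper's own statistical analysis should be consulted for the exact formulation.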
Document Type: Article
File Description: application/pdf; ELETTRONICO
ISSN: 2160-9209; 1545-5971
DOI: 10.1109/tdsc.2024.3424900
Access URL: https://ieeexplore.ieee.org/document/10591362
https://doi.org/10.1109/TDSC.2024.3424900
https://hdl.handle.net/11365/1267395
Rights: IEEE Copyright
Accession Number: edsair.doi.dedup.....bc7d5af8a8bf7bee6a7ba734a7d26be7
Database: OpenAIRE