MAID: Model Attribution via Inverse Diffusion


Detailed Bibliography
Published in: Proceedings of the ... IEEE International Conference on Acoustics, Speech and Signal Processing (2025), pp. 1-5
Main Authors: Zhu, Luyu; Ye, Kai; Yao, Jiayu; Li, Chenxi; Zhao, Luwen; Cao, Yuxin; Wang, Derui; Hao, Jie
Format: Conference paper
Language: English
Published: IEEE, 06.04.2025
ISSN:2379-190X
Description
Summary: The surge in AI-generated images, blending authentic and synthetic content, raises security concerns and complicates model attribution, especially with limited transparency. Existing methods either struggle to attribute across multiple frameworks or rely on additional conditions, such as textual descriptions or white-box access to the source model, which limits their practicality and effectiveness in real-world scenarios. To address this gap, we introduce Model Attribution via Inverse Diffusion (MAID), the first framework-agnostic and self-sufficient approach that leverages source-model features extracted by diffusion models and that also works for images generated by GANs. By employing the inverse diffusion process, we can use pre-trained diffusion models as denoising autoencoders, mapping images into a latent space and extracting the Diffusion Model Activations (DMA). This mapping effectively captures the unique characteristics of images originating from different source models, including authentic images, which exhibit distinct latent Gaussian signatures. Experimental results show that, even in data-asymmetric, unfair comparisons, the attribution classifier trained with our proposed DMA achieves approximately 15% and 3% higher accuracy (ACC) than state-of-the-art methods on the DiffusionForensics and Artifact datasets, respectively. The code is available at https://github.com/Zhu-Luyu/MAID.
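The pipeline the abstract describes — perturb an image with a forward-diffusion step, read off denoiser activations as features, and attribute the source model with a classifier trained on those features — can be illustrated with a deliberately tiny NumPy toy. Everything below (the Gaussian "source models", the frozen random-projection "denoiser", the nearest-centroid classifier) is a hypothetical stand-in for illustration only, not the authors' MAID implementation or their DMA extraction from a real diffusion U-Net.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 64, 32  # flattened "image" size; width of the toy feature map

# Stand-ins for two source models (e.g., a GAN and a diffusion model):
# each emits samples carrying its own fixed statistical signature.
def sample_source(label, n):
    mean = 0.8 if label == 0 else -0.8
    return rng.normal(mean, 1.0, size=(n, D))

# One DDPM-style forward step: x_t = sqrt(a_bar) * x_0 + sqrt(1 - a_bar) * eps.
def diffuse(x0, alpha_bar=0.5):
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

# Frozen random projection playing the role of a pre-trained denoiser;
# its hidden activations are our toy analogue of the paper's DMA features.
W = rng.normal(size=(D, H)) / np.sqrt(D)
def dma_features(x):
    return np.tanh(diffuse(x) @ W)

# Nearest-centroid attribution classifier over the DMA-like features.
def fit_centroids(feats, labels):
    return np.stack([feats[labels == c].mean(axis=0) for c in (0, 1)])

def predict(feats, centroids):
    dists = ((feats[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)

n_train, n_test = 200, 100
X_train = np.vstack([sample_source(0, n_train), sample_source(1, n_train)])
y_train = np.repeat([0, 1], n_train)
centroids = fit_centroids(dma_features(X_train), y_train)

X_test = np.vstack([sample_source(0, n_test), sample_source(1, n_test)])
y_test = np.repeat([0, 1], n_test)
acc = (predict(dma_features(X_test), centroids) == y_test).mean()
print(f"attribution accuracy on held-out samples: {acc:.2f}")
```

Because the two toy sources carry distinct signatures that survive the noising step and the frozen feature map, even this minimal classifier separates them well on held-out samples — the same intuition the abstract invokes for why diffusion activations expose source-model fingerprints.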
DOI:10.1109/ICASSP49660.2025.10888869