StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery

Detailed Bibliography
Published in: Proceedings / IEEE International Conference on Computer Vision, pp. 2065-2074
Main Authors: Patashnik, Or; Wu, Zongze; Shechtman, Eli; Cohen-Or, Daniel; Lischinski, Dani
Format: Conference Paper
Language: English
Published: IEEE, 01.10.2021
ISSN: 2380-7504
Description
Summary: Inspired by the ability of StyleGAN to generate highly realistic images in a variety of domains, much recent work has focused on understanding how to use the latent spaces of StyleGAN to manipulate generated and real images. However, discovering semantically meaningful latent manipulations typically involves painstaking human examination of the many degrees of freedom, or an annotated collection of images for each desired manipulation. In this work, we explore leveraging the power of recently introduced Contrastive Language-Image Pre-training (CLIP) models in order to develop a text-based interface for StyleGAN image manipulation that does not require such manual effort. We first introduce an optimization scheme that utilizes a CLIP-based loss to modify an input latent vector in response to a user-provided text prompt. Next, we describe a latent mapper that infers a text-guided latent manipulation step for a given input image, allowing faster and more stable text-based manipulation. Finally, we present a method for mapping text prompts to input-agnostic directions in StyleGAN's style space, enabling interactive text-driven image manipulation. Extensive results and comparisons demonstrate the effectiveness of our approaches.
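The first contribution in the summary, the CLIP-loss optimization scheme, can be sketched in a few lines. The following is a minimal illustration, not the authors' released code: `generator` stands in for a hypothetical pretrained StyleGAN that maps a latent to an RGB image, the step count, learning rate, and L2 weight are placeholder values, and the paper's additional identity-preservation term is omitted. The CLIP calls follow the public openai/CLIP package.

```python
import torch
import torch.nn.functional as F
import clip  # openai/CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)

def clip_loss(image, text_tokens):
    # 1 - cosine similarity between CLIP embeddings of image and prompt.
    # CLIP's visual encoder expects 224x224 input; full CLIP preprocessing
    # (cropping/normalization) is omitted here for brevity.
    image = F.interpolate(image, size=(224, 224), mode="bilinear",
                          align_corners=False)
    image_features = clip_model.encode_image(image)
    text_features = clip_model.encode_text(text_tokens)
    return (1 - F.cosine_similarity(image_features, text_features)).mean()

def edit_latent(generator, w_init, prompt, steps=200, lr=0.1, l2_weight=0.008):
    # Optimize a copy of the source latent so the generated image matches
    # the prompt; the L2 term keeps the edit close to the original image
    # (one plausible regularizer, shown here as an assumption).
    text_tokens = clip.tokenize([prompt]).to(device)
    w = w_init.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        image = generator(w)  # hypothetical: latent -> image tensor (N,3,H,W)
        loss = clip_loss(image, text_tokens) \
               + l2_weight * ((w - w_init) ** 2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()
```

A single edit would then look like `w_edited = edit_latent(G, w, "a face with blue eyes")`, after which `G(w_edited)` renders the manipulated image. The latent-mapper and global-direction variants mentioned in the summary replace this per-image optimization with, respectively, a trained network that predicts the manipulation step and a precomputed, input-agnostic direction in style space.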
DOI: 10.1109/ICCV48922.2021.00209