ConceptLab: Creative Concept Generation using VLM-Guided Diffusion Prior Constraints.

Title: ConceptLab: Creative Concept Generation using VLM-Guided Diffusion Prior Constraints.
Authors: Richardson, Elad; Goldberg, Kfir; Alaluf, Yuval; Cohen-Or, Daniel
Source: ACM Transactions on Graphics; Jun 2024, Vol. 43, Issue 3, p1-14, 14p
Subjects: PETS
Abstract: Recent text-to-image generative models have enabled us to transform our words into vibrant, captivating imagery. The surge of personalization techniques that has followed has also allowed us to imagine unique concepts in new scenes. However, an intriguing question remains: How can we generate a new, imaginary concept that has never been seen before? In this article, we present the task of creative text-to-image generation, where we seek to generate new members of a broad category (e.g., generating a pet that differs from all existing pets). We leverage the under-studied Diffusion Prior models and show that the creative generation problem can be formulated as an optimization process over the output space of the diffusion prior, resulting in a set of "prior constraints." To keep our generated concept from converging into existing members, we incorporate a question-answering Vision-Language Model that adaptively adds new constraints to the optimization problem, encouraging the model to discover increasingly more unique creations. Finally, we show that our prior constraints can also serve as a strong mixing mechanism, allowing us to create hybrids between generated concepts, introducing even more flexibility into the creative process. [ABSTRACT FROM AUTHOR]
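The abstract describes an optimization over the output space of the diffusion prior, with positive constraints pulling the learned concept toward the broad category and negative constraints pushing it away from existing members. The sketch below is a rough illustration of that formulation only, not the paper's implementation; `diffusion_prior`, `clip_text_encode`, `v_star`, and `neg_weight` are all hypothetical stand-ins.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the "prior constraints" idea, under assumed interfaces:
# `diffusion_prior` maps a prompt embedding containing the learned concept
# token to a predicted CLIP image embedding, and `clip_text_encode` embeds a
# list of text prompts. Neither name is the paper's actual API.

def prior_constraint_loss(v_star, positive_texts, negative_texts,
                          diffusion_prior, clip_text_encode, neg_weight=1.0):
    # Predicted image embedding for a prompt such as "a photo of a <v*>".
    pred = F.normalize(diffusion_prior(v_star), dim=-1)          # (dim,)

    pos = F.normalize(clip_text_encode(positive_texts), dim=-1)  # (P, dim)
    neg = F.normalize(clip_text_encode(negative_texts), dim=-1)  # (N, dim)

    # Stay close to the broad category, away from known members.
    loss_pos = (1.0 - pred @ pos.T).mean()
    loss_neg = (pred @ neg.T).clamp(min=0.0).mean()
    return loss_pos + neg_weight * loss_neg

# Per the abstract, the negative set grows adaptively: periodically, a
# question-answering VLM inspects a rendering of the current concept, names
# the existing member it most resembles, and that name is appended to
# `negative_texts`. The mixing mechanism can be read the same way: already
# generated concepts supply the positive constraints for a hybrid.
```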
Database: Complementary Index
ISSN: 0730-0301
DOI: 10.1145/3659578