SPG: Style‐Prompting Guidance for Style‐Specific Content Creation.

Saved in:
Bibliographic details
Title: SPG: Style‐Prompting Guidance for Style‐Specific Content Creation.
Authors: Liang, Qian1 (AUTHOR), Chen, Zichong1 (AUTHOR), Zhou, Yang1 (AUTHOR) zhouyangvcc@gmail.com, Huang, Hui1 (AUTHOR)
Source: Computer Graphics Forum. Oct 2025, Vol. 44 Issue 7, p1-10. 10p.
Keywords: ARTISTIC creation
Abstract: Although recent text‐to‐image (T2I) diffusion models excel at aligning generated images with textual prompts, controlling the visual style of the output remains a challenging task. In this work, we propose Style‐Prompting Guidance (SPG), a novel sampling strategy for style‐specific image generation. SPG constructs a style noise vector and leverages its directional deviation from unconditional noise to guide the diffusion process toward the target style distribution. By integrating SPG with Classifier‐Free Guidance (CFG), our method achieves both semantic fidelity and style consistency. SPG is simple, robust, and compatible with controllable frameworks like ControlNet and IPAdapter, making it practical and widely applicable. Extensive experiments demonstrate the effectiveness and generality of our approach compared to state‐of‐the‐art methods. Code is available at https://github.com/Rumbling281441/SPG. [ABSTRACT FROM AUTHOR]
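The abstract describes SPG as guiding sampling along the directional deviation of a style-conditioned noise prediction from the unconditional one, combined with Classifier-Free Guidance. The following is a minimal illustrative sketch of how such a combined guidance direction could be formed; the weight names `w_cfg` and `w_spg` and the exact weighting scheme are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def combined_guidance(eps_uncond, eps_text, eps_style, w_cfg=7.5, w_spg=3.0):
    """Sketch of a combined CFG + style-guidance noise estimate.

    eps_uncond: unconditional noise prediction.
    eps_text:   text-conditioned noise prediction (CFG direction).
    eps_style:  style-conditioned noise prediction (SPG direction).

    The CFG term pushes the sample toward the text prompt; the style
    term pushes it along the deviation of the style-conditioned noise
    from the unconditional noise, i.e. toward the target style.
    """
    return (eps_uncond
            + w_cfg * (eps_text - eps_uncond)
            + w_spg * (eps_style - eps_uncond))

# Example: with w_spg = 0 and w_cfg = 1 this reduces to plain
# text-conditioned sampling.
eps_u = np.zeros(4)
eps_t = np.ones(4)
eps_s = 2.0 * np.ones(4)
guided = combined_guidance(eps_u, eps_t, eps_s, w_cfg=1.0, w_spg=0.0)
```

In the actual sampler, `guided` would replace the noise prediction at each denoising step.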
Copyright of Computer Graphics Forum is the property of Wiley-Blackwell and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
Database: Business Source Index
ISSN: 0167-7055
DOI: 10.1111/cgf.70251