
AI image models gain creative edge by amplifying low-frequency features

Original vs. C3 (ours). Compared to the original diffusion models, our C3 consistently generates more creative images with no added computational cost. Credit: arXiv (2025). DOI: 10.48550/arxiv.2503.23538

Text-based image generation models can now automatically create high-resolution, high-quality images from natural language descriptions alone. However, when a typical model such as Stable Diffusion is given the text prompt "creative," its ability to generate truly creative images remains limited.

KAIST researchers have developed a technology that can enhance the creativity of text-based image generation models such as Stable Diffusion without additional training, allowing AI to draw creative chair designs that are far from ordinary.

Professor Jaesik Choi’s research team at KAIST Kim Jaechul Graduate School of AI, in collaboration with NAVER AI Lab, developed this technology to enhance the creative generation of AI generative models without the need for additional training. The work is published on the arXiv preprint server, and the code is available on GitHub.

Professor Choi’s research team developed a technology that enhances creative generation by amplifying the internal feature maps of text-based image generation models. They discovered that shallow blocks within the model play a crucial role in creative generation, and confirmed that amplifying values in the high-frequency region, after converting feature maps to the frequency domain, leads to noise or fragmented color patterns.

Accordingly, the research team demonstrated that amplifying the low-frequency region of shallow blocks can effectively enhance creative generation.

Overview of the methodology researched by the development team. After converting the internal feature map of a pre-trained generative model into the frequency domain through Fast Fourier Transform, the low-frequency region of the feature map is amplified, then re-transformed into the feature space via Inverse Fast Fourier Transform to generate an image. Credit: The Korea Advanced Institute of Science and Technology (KAIST)
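The FFT-amplify-IFFT pipeline described in the caption above can be sketched in a few lines. The following is a minimal illustration for a single 2D feature map using NumPy; the function name and the `factor` and `radius` parameters are assumptions for illustration, not the paper's actual implementation, which operates on the multi-channel feature maps inside the diffusion model's blocks.

```python
import numpy as np

def amplify_low_frequencies(feature_map, factor=1.5, radius=4):
    """Amplify the low-frequency components of a 2D feature map.

    A minimal sketch of the idea described in the article; the function
    name, `factor`, and `radius` are illustrative placeholders.
    """
    # Transform the feature map into the frequency domain.
    freq = np.fft.fft2(feature_map)
    # Shift the zero-frequency (DC) component to the center of the spectrum.
    freq = np.fft.fftshift(freq)

    # Build a circular mask selecting the low-frequency region around the center.
    h, w = feature_map.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2

    # Amplify only the low-frequency components.
    freq[mask] *= factor

    # Transform back into the feature space.
    freq = np.fft.ifftshift(freq)
    return np.fft.ifft2(freq).real

fmap = np.random.default_rng(0).normal(size=(16, 16))
out = amplify_low_frequencies(fmap, factor=2.0)
# Amplifying the DC component scales the map's mean; the spatial shape is kept.
print(out.shape, np.isclose(out.mean(), 2.0 * fmap.mean()))
```

Amplifying only the masked low-frequency region boosts the coarse structure of the feature map while leaving fine high-frequency detail untouched, which is the behavior the team found avoids noise and fragmented color patterns.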

Considering originality and usefulness as two key elements defining creativity, the research team proposed an algorithm that automatically selects the optimal amplification value for each block within the generative model.
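A selection procedure balancing the two criteria might be sketched as follows. This is a hypothetical illustration: the scoring functions, the utility floor, and the plain multiplicative stand-in for frequency-domain amplification are placeholders, not the paper's actual metrics or algorithm.

```python
import numpy as np

def select_amplification(feature_map, candidates, novelty_score, utility_score,
                         min_utility=0.7):
    """Return the candidate amplification factor with the highest novelty
    among those keeping utility above a floor.

    Hypothetical sketch; `novelty_score`, `utility_score`, and `min_utility`
    are illustrative placeholders for the paper's originality/usefulness criteria.
    """
    best_factor, best_novelty = 1.0, float("-inf")
    for c in candidates:
        amplified = feature_map * c  # stand-in for low-frequency amplification
        if utility_score(amplified) < min_utility:
            continue  # too much amplification destroys usefulness
        novelty = novelty_score(amplified)
        if novelty > best_novelty:
            best_factor, best_novelty = c, novelty
    return best_factor

# Toy demonstration with placeholder scores: novelty grows with deviation
# from the original map, while utility decays with it.
fmap = np.ones((4, 4))
novelty = lambda x: np.abs(x - fmap).mean()
utility = lambda x: 1.0 - 0.1 * np.abs(x - fmap).mean()
best = select_amplification(fmap, [1.0, 1.5, 2.0, 3.0, 5.0], novelty, utility)
print(best)  # 3.0: the strongest amplification that still meets the utility floor
```

The design choice mirrors the article's framing: originality is maximized only subject to a usefulness constraint, so the selected factor per block is as aggressive as possible without degrading the image's meaning.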

Through the developed algorithm, appropriate amplification of the internal feature maps of a pre-trained Stable Diffusion model enhanced creative generation without additional classification data or training.

The research team quantitatively proved, using various metrics, that their developed algorithm can generate images that are more novel than those from existing models, without significantly compromising utility.

In particular, they confirmed an increase in image diversity by mitigating the mode collapse problem that occurs in the SDXL-Turbo model, which was developed to significantly improve the image generation speed of the Stable Diffusion XL (SDXL) model. Furthermore, user studies confirmed a significant improvement in novelty relative to utility compared to existing methods.

Application examples of the methodology researched by the development team. Various Stable Diffusion models generate images that are more novel than their original outputs while preserving the meaning of the generated object. Credit: The Korea Advanced Institute of Science and Technology (KAIST)

Jiyeon Han and Dahee Kwon, Ph.D. candidates at KAIST and co-first authors of the paper, stated, “This is the first methodology to enhance the creative generation of generative models without new training or fine-tuning. We have shown that the latent creativity within trained AI generative models can be enhanced through feature map manipulation.”

They added, “This research makes it easy to generate creative images using only text from existing trained models. It is expected to provide new inspiration in various fields, such as creative product design, and contribute to the practical and useful application of AI models in the creative ecosystem.”

This research, co-authored by Jiyeon Han and Dahee Kwon, Ph.D. candidates at KAIST Kim Jaechul Graduate School of AI, was presented on June 16 at the Conference on Computer Vision and Pattern Recognition (CVPR), a leading international computer vision conference.

More information:
Jiyeon Han et al, Enhancing Creative Generation on Stable Diffusion-based Models, arXiv (2025). DOI: 10.48550/arxiv.2503.23538

Journal information:
arXiv


Provided by
The Korea Advanced Institute of Science and Technology (KAIST)


Citation: AI image models gain creative edge by amplifying low-frequency features (2025, June 20), retrieved 20 June 2025.

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.

