
AI blunders: Six-finger hands, two suns and Jesus Christ on a surfboard in a stormy sea

Image: a computer screen showing an AI interface connected to a large energy source. Credit: AI-generated image

When teaching a Photoshop class at a children’s summer camp, Stevens undergraduate student Gursimran Vasir noticed something strange.

When children searched for images using Photoshop’s AI feature by typing text prompts, they didn’t always get back what they expected. In fact, many images appeared skewed, incorrect or biased. Vasir experienced similar issues herself. For example, when prompting the AI for an image of a “cleaning person,” she would get back a picture of a woman cleaning. When asked for a “woman cleaning” image, the AI would generate a picture of a white woman, oftentimes cleaning a countertop with a sponge or a spray bottle.

“A lot of kids were struggling with AI because it wasn’t exactly giving them what they wanted,” Vasir says. “But they didn’t know what language to use to express their difficulties with the situation.”

She realized that there was no standardized language for describing AI errors and biases, and thought that creating one would benefit future AI systems. She proposed developing such a language to Stevens Associate Professor Jina Huh-Yoo, a human-computer interaction (HCI) researcher who studies emerging technologies, such as AI, to support health and well-being.

The result was a study titled “Characterizing the Flaws of Image-Based AI-Generated Content,” presented as a work-in-progress at the ACM CHI Conference on Human Factors in Computing Systems on April 26, 2025.

For the study, Vasir collected and examined 482 Reddit posts where users described various AI-generated image blunders. She broke her findings into four categories: AI surrealism, cultural bias, logical fallacy and misinformation.

AI surrealism, she explains, is when something in the image registers as not quite real and creates a feeling of unease, such as textures that look too smooth or colors that are too perfect. AI’s cultural bias was apparent when a user prompted the tool to depict Jesus Christ walking on water in a stormy sea and received an image of Christ on a surfboard in a stormy sea. Asking for an image of a “cleaning person” and consistently receiving images of a woman cleaning, rather than a more gender-diverse result, is another example of cultural bias, Vasir says.

The misinformation category covers factually incorrect depictions, such as images of a requested city that do not resemble the city at all. Finally, logical fallacy is when the algorithm returns something that contradicts a common-sense understanding of the world.

“Let’s say, you ask for an image of a hand and receive one that has six fingers,” explains Vasir. “Or you ask for an image of a landscape and receive one that has two suns.”

Huh-Yoo notes that this study investigates a previously little-researched topic: AI errors in image output rather than text output.

“I think this is a very unique, novel work that’s adding to the discussion of the conversations around AI biases, because the existing conversations were mostly focused on text, and this effort advances onto the images,” says Huh-Yoo.

Overall, she says she is very impressed with Stevens undergraduate students’ focus on research and the quality of their efforts. “Gursimran took the lead in this project and developed the research questions and the methods herself. I just guided her through it.”

The work has generated a lot of interest from industry players, says Huh-Yoo. “This is a hot topic in the design and graphics industry,” she explains, because those fields are facing similar challenges with AI-generated content.

As AI adoption increases, whether for marketing, education, travel or any other use, users will expect to receive information and images that are correct and bias-free, Vasir points out. Having the proper terms and language to describe these issues will help developers train AI to generate images appropriately.

“Developers owe users adequate technology that functions as intended,” says Vasir. “When we have tools that do not do so, it leaves more room for misuse. Creating the proper vocabulary to open a dialog between the user and the developer is the first step in fixing these problems.”

More information:
Gursimran Vasir et al., “Characterizing the Flaws of Image-Based AI-Generated Content,” Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (2025). DOI: 10.1145/3706599.3720004

Provided by Stevens Institute of Technology


Citation: AI blunders: Six-finger hands, two suns and Jesus Christ on a surfboard in a stormy sea (2025, June 26), retrieved 26 June 2025

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.

