
Researchers are teaching AI to see more like humans

Seeing through someone else's eyes. Credit: Unsplash/CC0 Public Domain

At Brown University, a new project suggests that teaching artificial intelligence to perceive the world more like people do may begin with something as simple as a game. The project invites participants to play an online game called Click Me, which helps AI models learn how people see and interpret images. The game is fun and accessible, but its purpose is more ambitious: to understand the root causes of AI errors and to systematically improve how AI systems represent the visual world.

Over the past decade, AI systems have become more powerful and more widely used, particularly for tasks like recognizing images. For example, these systems can identify animals and objects or diagnose medical conditions from images. However, they sometimes make mistakes that humans rarely do.

For instance, an AI algorithm might confidently label a photo of a dog wearing sunglasses as a completely different animal or fail to recognize a stop sign if it’s partially covered by graffiti. As these models become larger and more complex, these kinds of errors become more frequent, revealing a growing gap between how AI and humans perceive the world.

Recognizing this challenge, researchers propose combining insights from psychology and neuroscience with machine learning to create the next generation of human-aligned AI. Their goal is to understand how people process visual information and to translate those patterns into algorithms that guide AI systems to act in similar ways.

The Click Me game plays a central role in this vision. In the game, participants click on the parts of an image they believe will be most informative for the AI to recognize it. Because the AI sees only the parts of the image that have been clicked, players are encouraged to think strategically about which regions carry the most information rather than clicking at random, maximizing what the AI can learn from each image.
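
The project's own tooling is not reproduced here, but the core mechanic of hiding everything except the clicked regions is easy to sketch. The following Python/NumPy snippet is a minimal illustration under assumed details: the function name mask_image_by_clicks, the circular reveal radius and the zeroed-out background are placeholders, not specifics of how Click Me actually works.

```python
import numpy as np

def mask_image_by_clicks(image, clicks, radius=20):
    """Keep visible only the regions of an image that players clicked.

    image  : H x W x 3 array of pixel values
    clicks : list of (row, col) click coordinates
    radius : how far around each click stays visible (assumed parameter)
    """
    h, w = image.shape[:2]
    rows, cols = np.ogrid[:h, :w]
    visible = np.zeros((h, w), dtype=bool)
    for r, c in clicks:
        # Reveal a small disc around every click
        visible |= (rows - r) ** 2 + (cols - c) ** 2 <= radius ** 2
    masked = image.copy()
    masked[~visible] = 0  # hide everything the players did not click
    return masked

# Toy example: a random 64x64 "image" with two clicks near the center
toy_image = np.random.rand(64, 64, 3)
revealed = mask_image_by_clicks(toy_image, clicks=[(30, 30), (34, 40)], radius=8)
```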

The AI-human alignment occurs at a later stage, during which the AI is trained to categorize images. In this “neural harmonization” procedure, the researchers force the AI to focus on the same image features that humans had identified — those clicked during the game — to make sure its visual recognition strategy aligns with that of humans.
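
In practice, harmonization of this kind can be expressed as an extra term in the training loss: the model is rewarded for classifying images correctly and penalized when its own importance map drifts away from the human click map. The PyTorch sketch below illustrates that idea under stated assumptions: the name harmonization_loss, the use of input-gradient saliency as the model's importance map, and the alpha weight are illustrative choices, not the team's published implementation.

```python
import torch
import torch.nn.functional as F

def harmonization_loss(model, images, labels, human_maps, alpha=1.0):
    """Cross-entropy on the labels plus a penalty that pulls the model's
    gradient-based saliency toward human click maps.

    images     : (B, 3, H, W) batch of input images
    labels     : (B,) ground-truth class indices
    human_maps : (B, H, W) click-density maps collected from players
    alpha      : weight on the alignment term (illustrative value)
    """
    images = images.clone().requires_grad_(True)
    logits = model(images)
    ce = F.cross_entropy(logits, labels)

    # Saliency: gradient of the true-class score with respect to the pixels
    scores = logits.gather(1, labels.unsqueeze(1)).sum()
    grads, = torch.autograd.grad(scores, images, create_graph=True)
    saliency = grads.abs().sum(dim=1)  # (B, H, W)

    # Normalize both maps so they are comparable, then penalize the gap
    saliency = saliency / (saliency.flatten(1).sum(dim=1).view(-1, 1, 1) + 1e-8)
    human = human_maps / (human_maps.flatten(1).sum(dim=1).view(-1, 1, 1) + 1e-8)
    align = F.mse_loss(saliency, human)

    return ce + alpha * align
```

In this sketch, raising alpha forces the model's saliency closer to the human maps, while lowering it lets the model fall back on whatever features it would have learned on its own.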

What makes this project especially remarkable is how successfully it has engaged the public. The team has attracted thousands of people to participate in Click Me, helping it gain attention across platforms like Reddit and Instagram, and generating tens of millions of interactions with the website to help train the AI model. This type of large-scale public participation allows the research team to rapidly collect data on how people perceive and evaluate visual information.

At the same time, the team has developed a new computational framework for training AI models with this kind of behavioral data. By aligning the AI's choices and response times with those of humans, the researchers can build systems that match not only what humans decide but also how long they take to decide, leading to a more natural and interpretable decision-making process.
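
The article does not spell out the framework's details, but one way to couple the two signals is a loss with a choice term and a response-time term. In the sketch below, the entropy of the model's output stands in for "how long it takes to decide"; the name behavior_alignment_loss, the beta weight and the entropy proxy are assumptions made for illustration, not the researchers' actual code.

```python
import math
import torch
import torch.nn.functional as F

def behavior_alignment_loss(logits, human_choices, human_rts, beta=0.5):
    """Score a model on both what people chose and how long they took.

    logits        : (B, C) model outputs for a batch of trials
    human_choices : (B,) the category each person picked
    human_rts     : (B,) response times, rescaled to [0, 1]
    beta          : weight on the response-time term (illustrative value)
    """
    # Match the choices people made
    choice_term = F.cross_entropy(logits, human_choices)

    # Use the model's uncertainty (normalized entropy) as a stand-in for
    # decision time: harder trials should take longer for the model too
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    predicted_rt = entropy / math.log(logits.shape[1])
    rt_term = F.mse_loss(predicted_rt, human_rts)

    return choice_term + beta * rt_term
```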

The practical applications of this work are wide-ranging. In medicine, for instance, doctors need to understand and trust the AI tools that assist with diagnoses. If AI systems can explain their conclusions in ways that match human reasoning, they become more reliable and easier to integrate into care.

Similarly, in self-driving cars, AI that better understands how humans make visual decisions can help predict driver behavior and prevent accidents. Beyond these examples, human-aligned AI could improve accessibility tools, educational software and decision support across many industries. Importantly, this work also sheds light on how the human brain works.

By emulating human vision in AI systems, the researchers have been able to develop more accurate models of human visual perception than were previously available.

This initiative underscores why federal support for foundational research matters. Through NSF’s investment, researchers are advancing the science of AI and its relevance to society. The research not only pushes the boundaries of knowledge but also delivers practical tools that can improve the safety and reliability of the technologies we use daily.

Provided by National Science Foundation


Citation: Researchers are teaching AI to see more like humans (2025, June 19), retrieved 19 June 2025 from

