
AI datasets have human values blind spots: New research

The researchers started by creating a taxonomy of human values. Credit: Obi et al, CC BY-ND

My colleagues and I at Purdue University have uncovered a significant imbalance in the human values embedded in AI systems. The systems were predominantly oriented toward information and utility values and less toward prosocial, well-being and civic values.

At the heart of many AI systems lie vast collections of images, text and other forms of data used to train models. While these datasets are meticulously curated, they can still contain unethical or prohibited content.

To ensure AI systems do not draw on harmful content when responding to users, researchers introduced a method called reinforcement learning from human feedback. In this approach, highly curated datasets of human preferences shape the behavior of AI systems to be helpful and honest.
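At the core of this approach is the human-preference record: for a single prompt, an annotator marks which of two candidate responses they prefer. A minimal sketch of that data structure (field names here are illustrative assumptions, not the format of any particular company's dataset):

```python
from dataclasses import dataclass


@dataclass
class PreferencePair:
    # One human-feedback record: for the same prompt, the annotator
    # preferred `chosen` over `rejected`. Collections of such pairs
    # are what reinforcement learning from human feedback trains on.
    prompt: str
    chosen: str
    rejected: str


# Hypothetical example record, for illustration only.
pair = PreferencePair(
    prompt="How do I book a flight?",
    chosen="You can compare fares on airline or travel sites, then...",
    rejected="I can't help with that.",
)
```

A reward model trained on many such pairs learns to score responses the way human annotators would, and that score then guides the AI system's behavior.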

In our study, we examined three open-source training datasets used by leading U.S. AI companies. We constructed a taxonomy of human values through a review of the literature in moral philosophy, value theory, and science, technology and society studies. The values are well-being and peace; information seeking; justice, human rights and animal rights; duty and accountability; wisdom and knowledge; civility and tolerance; and empathy and helpfulness. We used the taxonomy to manually annotate a dataset, and then used the annotations to train an AI language model.
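To make the annotation step concrete, here is a minimal sketch of labeling text with the seven value categories. It uses simple keyword cues as a stand-in for the authors' actual method, which was manual annotation followed by training a language model; the cue words are invented for illustration:

```python
# Hypothetical keyword cues per value category from the taxonomy.
# These are illustrative stand-ins, not the study's annotation criteria.
VALUE_CUES = {
    "well-being and peace": ["well-being", "calm", "peace"],
    "information seeking": ["how do i", "where can", "what is"],
    "justice, human rights and animal rights": ["rights", "fair", "justice"],
    "duty and accountability": ["duty", "responsible", "accountable"],
    "wisdom and knowledge": ["explain", "why", "understand"],
    "civility and tolerance": ["respect", "polite", "tolerance"],
    "empathy and helpfulness": ["comfort", "support", "feel"],
}


def annotate(text: str) -> list[str]:
    """Return every value category whose cue appears in the text."""
    lowered = text.lower()
    return [value for value, cues in VALUE_CUES.items()
            if any(cue in lowered for cue in cues)]
```

In the study itself, manual annotations like these served as training data for a language model, which could then label examples at a scale no human team could match.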

Our model allowed us to examine the AI companies' datasets. We found that these datasets contained many examples that train AI systems to be helpful and honest when users ask questions like "How do I book a flight?" but very few examples of how to answer questions related to empathy, justice and human rights. Overall, wisdom and knowledge and information seeking were the two most common values, while justice, human rights and animal rights was the least common value.
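Once every example carries a value label, measuring the imbalance reduces to counting labels and ranking categories. A minimal sketch, using invented counts rather than the study's actual figures:

```python
from collections import Counter

# Hypothetical per-example value labels for a training dataset.
# The proportions are made up for illustration; the real labels
# would come from the trained annotator model.
annotations = [
    "wisdom and knowledge", "wisdom and knowledge", "wisdom and knowledge",
    "information seeking", "information seeking",
    "empathy and helpfulness",
    "justice, human rights and animal rights",
]

counts = Counter(annotations)
# Rank categories from most to least represented to expose the imbalance.
ranking = counts.most_common()
most_common_value, _ = ranking[0]
least_common_value, _ = ranking[-1]
```

Applied to a real dataset, a ranking like this is what reveals that informational values dominate while prosocial ones are scarce.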

Why it matters

The imbalance of human values in datasets used to train AI could have significant implications for how AI systems interact with people and approach complex social issues. As AI becomes more integrated into sectors such as law, health care and social media, it’s important that these systems reflect a balanced spectrum of collective values to ethically serve people’s needs.

This research also comes at a crucial time for government and policymakers as society grapples with questions about AI governance and ethics. Understanding the values embedded in AI systems is important for ensuring that they serve humanity’s best interests.

What other research is being done

Many researchers are working to align AI systems with human values. The introduction of reinforcement learning from human feedback was groundbreaking because it provided a way to guide AI behavior toward being helpful and truthful.

Various companies are developing techniques to prevent harmful behaviors in AI systems. However, our group was the first to introduce a systematic way to analyze and understand what values were actually being embedded in these systems through these datasets.

What’s next

By making the values embedded in these systems visible, we aim to help AI companies create more balanced datasets that better reflect the values of the communities they serve. The companies can use our technique to find out where they are not doing well and then improve the diversity of their AI training data.

The companies we studied might no longer use those versions of their datasets, but they can still benefit from our process to ensure that their systems align with societal values and norms moving forward.

Provided by
The Conversation


This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation: AI datasets have human values blind spots: New research (2025, February 6), retrieved 6 February 2025.

