
New AI framework aims to remove bias in key areas such as health, education and recruitment

Figure: Local optimal solutions obtained using a decision tree as the underlying classifier for the Adult problem on the validation (left) and test (right) sets. Gray dots represent all solutions found by the meta-algorithm across runs; orange dots represent the average Pareto front. Credit: Machine Learning (2025). DOI: 10.1007/s10994-024-06721-w

Researchers from the Data Science and Artificial Intelligence Institute (DATAI) of the University of Navarra (Spain) have published an innovative methodology that improves the fairness and reliability of artificial intelligence models used in critical decision-making. These decisions significantly affect people’s lives or the operations of organizations in areas such as health, education, justice, and human resources.

The team, made up of researchers Alberto García Galindo, Marcos López De Castro, and Rubén Armañanzas Arnedillo, has developed a new theoretical framework that optimizes the hyperparameters of reliable machine learning models. These models are AI algorithms that make predictions transparently, with guaranteed confidence levels. In this contribution, the researchers propose a methodology that reduces inequalities related to sensitive attributes such as race, gender, or socioeconomic status.
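To make the idea concrete, here is a minimal sketch in Python of the general recipe: search over a classifier’s hyperparameters while scoring each configuration on two objectives at once, predictive accuracy and a fairness measure. Plain random search stands in for the paper’s evolutionary algorithm, the data are synthetic, and the group-wise accuracy gap is a placeholder fairness score rather than the authors’ exact metric.

```python
# Sketch only: random search stands in for the paper's evolutionary
# algorithm; the accuracy gap between groups is a placeholder fairness
# score, not the authors' exact metric.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, random_state=0)
group = np.random.default_rng(0).integers(0, 2, size=600)  # synthetic sensitive attribute
X_tr, X_val, y_tr, y_val, g_tr, g_val = train_test_split(
    X, y, group, test_size=0.33, random_state=0)

rng = np.random.default_rng(1)
space = {"max_depth": [3, 5, 8, None], "min_samples_leaf": [1, 5, 20]}
results = []
for _ in range(20):
    params = {k: v[rng.integers(len(v))] for k, v in space.items()}
    clf = DecisionTreeClassifier(random_state=0, **params).fit(X_tr, y_tr)
    correct = clf.predict(X_val) == y_val
    # Two objectives per configuration: accuracy and a group accuracy gap.
    acc = correct.mean()
    gap = abs(correct[g_val == 0].mean() - correct[g_val == 1].mean())
    results.append((params, acc, gap))

print(min(results, key=lambda r: r[2]))  # most equitable configuration found
```

Each candidate configuration ends up with an (accuracy, gap) score pair; it is over collections of such pairs that the framework reasons about trade-offs.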

The work, published in the journal Machine Learning, combines advanced prediction techniques (conformal prediction) with algorithms inspired by natural evolution (evolutionary learning). The resulting algorithms offer rigorous confidence guarantees and equitable coverage across social and demographic groups. The new framework thus provides the same level of reliability regardless of an individual’s characteristics, ensuring fair and unbiased results.
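Conformal prediction is the ingredient that supplies the confidence guarantees: instead of a single label, the model outputs a set of labels that contains the true one with a user-chosen probability. The following is a minimal, self-contained sketch of split conformal prediction for classification with a per-group coverage check of the kind the framework targets; the function names are illustrative, not taken from the authors’ code.

```python
# Minimal split conformal prediction plus a per-group coverage check.
# Illustrative names and synthetic data; not the authors' implementation.
import numpy as np

def conformal_quantile(scores, alpha):
    """Finite-sample-corrected (1 - alpha) quantile of calibration scores."""
    n = len(scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def prediction_sets(probs, qhat):
    """Keep every class whose nonconformity score 1 - p is at most qhat."""
    return [set(np.where(1 - p <= qhat)[0]) for p in probs]

def coverage_by_group(sets, y_true, groups):
    """Empirical coverage of the prediction sets within each group."""
    hit = np.array([y in s for s, y in zip(sets, y_true)])
    return {int(g): hit[groups == g].mean() for g in np.unique(groups)}

rng = np.random.default_rng(0)
n_cal, n_test, n_classes = 500, 300, 3
# Calibration: nonconformity score of the true class is 1 - p(true class).
probs_cal = rng.dirichlet(np.ones(n_classes), size=n_cal)
y_cal = rng.integers(0, n_classes, size=n_cal)
qhat = conformal_quantile(1 - probs_cal[np.arange(n_cal), y_cal], alpha=0.1)

probs_test = rng.dirichlet(np.ones(n_classes), size=n_test)
y_test = rng.integers(0, n_classes, size=n_test)
groups = rng.integers(0, 2, size=n_test)        # binary sensitive attribute
sets = prediction_sets(probs_test, qhat)
print(coverage_by_group(sets, y_test, groups))  # close to 0.9 by construction
```

Marginal coverage of roughly 1 - alpha is guaranteed by the conformal procedure, but nothing forces every group to receive it equally; closing that per-group gap is the equitable-coverage objective the evolutionary search pursues.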

“The widespread use of artificial intelligence in sensitive fields has raised ethical concerns due to possible algorithmic discrimination,” explains Armañanzas Arnedillo, principal investigator of DATAI at the University of Navarra.

“Our approach enables businesses and public policymakers to choose models that balance efficiency and fairness according to their needs, or in response to emerging regulations. This breakthrough is part of the University of Navarra’s commitment to fostering a responsible AI culture and promoting the ethical and transparent use of this technology.”

Application in real scenarios

The researchers tested the method on four real-world benchmark datasets with different characteristics, covering economic income, criminal recidivism, hospital readmission, and school applications. The results showed that the new prediction algorithms significantly reduced inequalities without compromising predictive accuracy.
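A plausible way to quantify the inequalities measured in such experiments is the gap in empirical coverage between demographic groups, traded off against efficiency (the average size of the prediction sets). The helpers below are a hypothetical sketch of those two competing objectives, not the paper’s exact definitions.

```python
# Hypothetical sketch of two competing objectives: equitable coverage
# (small gap between groups) and efficiency (small prediction sets).
import numpy as np

def coverage_gap(sets, y_true, groups):
    """Largest difference in empirical coverage between any two groups."""
    hit = np.array([y in s for s, y in zip(sets, y_true)])
    rates = [hit[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def avg_set_size(sets):
    """Mean number of labels per prediction set; smaller is more informative."""
    return float(np.mean([len(s) for s in sets]))

sets = [{0}, {0, 1}, {1}, {0, 1, 2}]
y_true = np.array([0, 1, 0, 2])
groups = np.array([0, 0, 1, 1])
print(coverage_gap(sets, y_true, groups))  # 0.5: group 0 fully covered, group 1 not
print(avg_set_size(sets))                  # 1.75
```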

“In our analysis we found, for example, striking biases in the prediction of school admissions, evidencing a significant lack of fairness based on family financial status,” notes Alberto García Galindo, a predoctoral researcher at DATAI, University of Navarra, and first author of the paper.

“In turn, these experiments demonstrated that our methodology can often reduce such biases without compromising the model’s predictive ability. Specifically, we found solutions in which discrimination was practically eliminated while prediction accuracy was maintained.”

The methodology offers a “Pareto front” of optimal algorithms, “which allows us to visualize the best available options according to priorities and to understand, for each case, how algorithmic fairness and accuracy are related.”
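A Pareto front here is simply the set of candidate models that no other candidate beats on both objectives at once. The short filter below illustrates the concept with made-up numbers; the paper’s meta-algorithm evolves such candidates rather than enumerating them.

```python
# Keep only non-dominated candidates; lower is better on both axes
# (predictive error and coverage gap). Toy numbers for illustration.
def pareto_front(candidates):
    front = []
    for c in candidates:
        dominated = any(
            o["err"] <= c["err"] and o["gap"] <= c["gap"]
            and (o["err"] < c["err"] or o["gap"] < c["gap"])
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return sorted(front, key=lambda c: c["err"])

models = [
    {"name": "tree-a", "err": 0.12, "gap": 0.08},
    {"name": "tree-b", "err": 0.10, "gap": 0.15},
    {"name": "tree-c", "err": 0.15, "gap": 0.02},
    {"name": "tree-d", "err": 0.13, "gap": 0.10},  # dominated by tree-a
]
print([m["name"] for m in pareto_front(models)])   # ['tree-b', 'tree-a', 'tree-c']
```

Every surviving point is a defensible choice; which one to deploy depends on how much accuracy a practitioner is willing to trade for fairness.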

According to the researchers, this innovation has vast potential in sectors where AI must support reliable and ethical critical decision-making. García Galindo points out that their method “not only contributes to fairness but also enables a deeper understanding of how the configuration of models influences the results, which could guide future research in the regulation of AI algorithms.”

The researchers have made the code and data from the study publicly available to encourage further research applications and transparency in this emerging field.

More information:
Alberto García-Galindo et al., Fair prediction sets through multi-objective hyperparameter optimization, Machine Learning (2025). DOI: 10.1007/s10994-024-06721-w

Provided by
Universidad de Navarra


Citation:
New AI framework aims to remove bias in key areas such as health, education and recruitment (2025, February 18), retrieved 18 February 2025 from

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
