Gender, nationality can influence suspicion of using AI in freelance writing

With the development of AI writing assistants like ChatGPT and Microsoft Copilot, large language models (LLMs) are now used in various writing professions to generate ideas and work more efficiently.

But are there negative associations or potential professional backlash for writers wrongfully (or rightfully) suspected of using AI? Does this suspicion vary based on the writer’s race, gender or nationality?

A new study by researchers at Cornell Tech and the University of Pennsylvania shows freelance writers who are suspected of using AI have worse evaluations and hiring outcomes. Freelancers whose profiles suggested they had East Asian identities were more likely to be suspected of using AI than profiles of white Americans. And men were more likely to be suspected of using AI than women.

“We have known for a long time that suspicion of using AI caused reduced evaluation of trustworthiness for the person allegedly using it, and also that people are not good at detecting AI use,” said co-author Mor Naaman, Don and Mibs Follett Professor at Cornell Tech. “Since a long line of research shows that biased evaluations and outcomes are common in the workplace and other settings, we wanted to understand how these AI perceptions may differ based on the gender, race and nationality of the person producing the content.”

The team presented their work, “Generative AI and Perceptual Harms: Who’s Suspected of using LLMs?” at the Association for Computing Machinery’s Conference on Human Factors in Computing Systems (CHI 2025), April 26–May 1 in Yokohama, Japan.

The researchers conducted online experiments to investigate the effects of perceived AI use by freelance writers. Participants were asked to evaluate the writers’ social media profiles, assessing whether they suspected the writers of using AI, the perceived quality of the writers’ work, and how likely they would be to hire the writers.

Unbeknownst to participants, the writers were all fictional and their made-up names and pictures suggested they belonged to different demographic groups based on gender, race and nationality.

The study found that writers suspected of using AI received lower quality evaluations and suffered a decreased likelihood of hiring across all demographic groups. However, participants in the study were more likely to suspect writers from certain groups of using AI, exhibiting some underlying biases.

Notably, freelance profiles that were suggestive of East Asian identities were more likely to be suspected of using AI than profiles of white Americans. However, there were no differences in quality evaluations or job outcomes for these profiles.

In addition, men were more likely to be suspected of using AI than women, though the researchers found no differences in quality evaluations or job outcomes between these groups once suspicion was accounted for.

“We believe that this gender difference might be due to stereotypes about the use of technology: the belief that men are more likely to use AI, or are perceived as willing to cheat,” said Naaman, who is a professor at the Jacobs Technion-Cornell Institute and the Cornell Ann S. Bowers College of Computing and Information Science.

In their paper, the researchers call for more investigation into the potentially harmful effects of AI perceptions on different demographic groups.

“If you are suspected of using AI, your outcomes will be worse,” Naaman continued. “It is essential to keep studying how these perceptions impact various groups and to develop strategies to mitigate these harms.”

The study’s co-authors are assistant professor Danaé Metaxa of the University of Pennsylvania and Kowe Kadoma, a Cornell Tech Ph.D. student in the field of information science, who helped come up with the idea for the study.

“In casual conversations with friends and family, someone would mention that they suspected an email or message to be AI-generated. I then became curious about when people suspect AI writing and if some people are praised for using AI tools while others are penalized for it,” said Kadoma. “As more people adopt AI technologies, we need to consider who might benefit and who might be disadvantaged. The technology is changing rapidly, and so are the norms around its use.”

More information:
Abstract: programs.sigchi.org/chi/2025/p … ogram/content/188472

Provided by
Cornell University


Citation:
Gender, nationality can influence suspicion of using AI in freelance writing (2025, May 8), retrieved 8 May 2025.
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.

