Generative AI shows clear gender and racial bias, UNESCO finds

Generative AI shows clear gender and racial bias, UNESCO finds

Written by NADJA editors

Photo under creative license


Generative AI tools are producing content that displays clear gender bias, homophobia and racial stereotyping, according to research by UNESCO. 

The study ‘Bias Against Women and Girls in Large Language Models’, examines stereotyping in large language models (LLMs) – the natural language processing tools that underpin generative AI platforms such as Chat GPT by OpenAI and Llama 2 by META. 

The study found that free, open source LLMs like Llama 2 and GPT-2 showed the most significant gender bias in their content. Women were described as working in domestic roles far more often than men, and were frequently associated with words like “home”, “family” and “children”, while male names were linked to “business”, “executive”, “salary” and “career”.

Researchers also asked the platforms to write a story about different personas. Llama 2 generated stories about boys and men that were dominated by the words “treasure”, “woods”, “sea”, “adventurous” and “decided”; while stories about women made most frequent use of the words “garden”, “love”, “felt,” “gentle”, “hair” and “husband”. 

Racist and homophobic bias

When the LLMs were prompted to generate texts about different ethnicities they were found to exhibit high levels of cultural bias. Taking the examples of British and Zulu men and women, British men were assigned occupations like “driver”, “doctor”, “bank clerk”, and “teacher”. Zulu men were more likely to be assigned the occupations “gardener” and “security guard”, and 20% of the texts on Zulu women assigned them roles as “domestic servants”, “cooks” and “housekeepers”.

The LLMs also had a tendency towards producing content that was negative about LGBTQ people and particular ethnic groups. When prompted to complete sentences beginning with the phrase “a gay person is…”, 70% of the content generated by Llama 2 was negative, for example “The gay person was regarded as the lowest in the social hierarchy.” More than half (60%) of the content generated by GPT-2 was negative, one sentence being “The gay person was thought of as a prostitute, a criminal, and had no rights.” 

However there was a distinction between LLMs that have not been fine-tuned with human feedback and those that have, like ChatGPT, which generated positive or neutral content in over 80% of cases for all subjects, showing a reduction in negative biases for subjects outside of heteronormative sexual orientation. 

Where does AI bias come from? 

UNESCO’s researchers define the sources of algorithmic bias as falling into three categories:  bias present in the data that is collected and measured; in algorithmic models that don’t account for diversity in data; and in the deployment, for instance when AI systems are applied in contexts that are different to their training context, which leads to inappropriate outcomes. 

Journalist and editor Melissa Zhu has been working on EquiQuote, an AI tool created to identify gender imbalances in sources quoted in news stories. As she was developing the tool, she came across a few instances of biases in generative AI. “I wanted to know the reasons why GPT-4 would see this person as a man or a woman so I added that into the prompt,” Zhu says. “And there was one response that really shocked me. It was a story about diplomacy and it included this person who is an ambassador. This person was a man, but when I looked at the reasons why it said it was a man, it said because “ambassador” indicates this person is male. That is not very surprising because based on the data, there are more men than women ambassadors. But it shouldn’t be taken as a reason for assuming that a person is male.”

“The fact that one of the ways that generative AI validates gender is through names is a challenge,” she adds. “We know that databases are biased because they’re mostly based on Western names: databases are mostly US and UK based. So when it comes to anything that doesn’t fall within that category, the accuracy rate drops a lot. For Chinese names for example, many times you can’t tell by the name alone if that person is a man or a woman. The way that AI operates is based on statistical probability, it doesn’t necessarily understand the context.”

“Another challenge is also that when we think of gender, it is usually male or female,” Zhu says. “But there are also a lot of people who don’t identify with either gender, so how do you identify non-binary people? That was a major problem, because the way that the English language is structured is very gendered: we have clear ways of identifying men and women, but we don’t have a clear way to identify anyone else.”

There is also evidence to suggest that AI models mirror the bias of their developers. In 2021, researchers at the University of Copenhagen highlighted that developers of AI models were largely white men. The models they studied generated content that aligned the most with language used by white men under 40, and the least with language used by young, non-white men. 

As LLMs are trained on vast amounts of data, it is challenging to identify and rectify biases. Additionally due to their high development costs and energy requirements, LLMs are frequently reused by different developers, which can spread bias to new applications. This has led AI researchers to warn of data cannibalism, or ‘model collapse’.  

“Data is, by definition, something from the past,” Zhu says. “So when using it to predict things in the future, then that’s going to be problematic. You have to be aware of it and also try to think of how you can use it for good.”

Removing bias in AI 

While open source AI models showed the most bias in their content, UNESCO researchers said that these tools also offer the greatest opportunity to tackle the problem with collaboration across the global research community. This can be achieved by ensuring diverse teams are involved in their development. 

There is a growing number of initiatives emerging that aim to remove bias in AI. Feminist AI is a volunteer-run project based in California that provides tools and training for women of colour, LGBTQ women and non-binary people to be involved in the design of AI models. 

Fixing the bAIs is a vast bank of royalty-free images that portray women of different ethnicities in various professions. Created using image-generating tools such as Midjourney, Dall-e and Stable Diffusion, the aim of the project is to teach AI that women can be associated with careers such as doctors, astronauts and CEOs. 

The aim of EquiQuote is to tackle the issue of men being quoted more often than women in news articles. The app identifies sources quoted in news stories and infers their genders based on factors such as names, pronouns, titles and contextual clues – for example, if someone is referred to as a “mother”. EquiQuote also identifies the role that this person plays in the story, for example whether they are a bystander, a neighbour or an expert. When an imbalance is spotted, it also suggests alternative sources, based on LinkedIn profiles. 

“This is not meant to be a comprehensive search, but it’s meant to offer suggestions of possible female experts that might be relevant to your story,” Zhu says. “And if not, it will raise awareness that this story is not gender balanced. It’s meant to give a gentle nudge to journalists to say that maybe you want to include more sources.”

Meanwhile, in November 2021 UNESCO member states unanimously adopted the organisation’s Recommendation on the Ethics of AI, the first global framework of its kind. In February this year, eight global tech companies including Microsoft, Salesforce and Telefonica endorsed the recommendation, which calls for specific actions to ensure gender equality in the design of AI tools, including ring-fencing funds to finance gender-parity schemes in companies and investing in programmes to increase womens’ participation in STEM fields.  


This article was updated on May 23, 2024 to say that EquiQuote uses generative AI to identify gender imbalances in sources quoted in news articles. It previously incorrectly stated that examined the content in AI-generated content


READ MORE


Subscribe to Blog via Email

Enter your email address to subscribe to this blog and receive notifications of new posts by email.

Tell us what you think

Discover more from NADJA

Subscribe now to keep reading and get access to the full archive.

Continue reading