AI gender equality: Challenging stereotypes and bias

As artificial intelligence is changing the way we work, create and think, Prerna Mehra tells us about gender bias and how she hopes to address it, one image at a time.

Using artificial intelligence to fix artificial intelligence. That’s the purpose of Fixing the bAIs, an initiative with the mission of addressing gender representation in AI. Launched on International Women’s Day, Fixing the bAIs is a vast bank of royalty- and rights-free images portraying women in various professions.

Created using image-generating AI tools such as Midjourney, DALL-E, and Stable Diffusion, the images have no gender attached to their file names and descriptions, and have been uploaded to public image datasets, social media platforms and other stock image websites. The goal is to teach AI that women can be doctors, astronauts, and CEOs – just as much as men can be dancers, nurses, or teachers.

“When you talk about equality, it is not only about women. It is about making all humankind equal, giving them equal opportunities, giving them equal importance,” says Prerna Mehra, the creative force behind Fixing the bAIs. “While we can keep talking about it, unless we actually show it, nobody will realise it. That’s why we chose to use image generating tools. Pictures work way more than words.”

Mehra worked in India for more than 15 years with renowned creative agencies, including JWT, Ogilvy, Creativeland Asia, and Cheil Worldwide, before moving to the United Arab Emirates (UAE). She is currently the creative director at the marketing communications network MullenLowe in Dubai. The idea of Fixing the bAIs came to her when she started experimenting with Midjourney, an AI programme that generates images from natural language descriptions.

“Because I’m in advertising, I love playing with new technologies. I was playing with Midjourney when I discovered that it is so biased. I started searching for some professions, and, to my shock and surprise, all of them had only men. And I was thinking ‘I work so hard, and there’s no representation of me in this.’ This was absolutely devastating. It was so depressing.”

“AI learns from the past to shape the future”

“Most people don’t realise how important it is [to address gender bias in AI]. They think it doesn’t affect them but it does – one way or the other. AI is now in the hands of everybody, and if it is not unbiased, then it’s going to create more harm than good,” Mehra says.

“Imagine you are looking for a loan. But just because you’re a woman, you will get higher interest. Or if you’re looking for a job, your application is not going to go through – just because you’re a woman. Or if you’re looking for medical consultations, you will not get the best treatment because there isn’t much data about women in medical history. Take the Yentl syndrome for example: many women have died from misdiagnosed heart attacks because their symptoms present differently from men, and medical research has mainly focused on symptoms of male heart attacks.”

AI systems draw their material from content that has already been published. If the original data contains certain biases, these will get replicated by the algorithms, which in turn, when used for prolonged periods of time, will reinforce and amplify these biases. 
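This feedback loop can be illustrated with a minimal, purely hypothetical sketch: a toy "generator" that simply samples from the frequencies in its training data (real image models are far more complex, but skew in the data still shows up in the output). The counts and the rebalancing step below are invented for illustration; they echo the article's 1-woman-in-12 Midjourney demonstration and the initiative's strategy of adding counter-examples to public datasets.

```python
import random
from collections import Counter

# Hypothetical training set for one profession, skewed toward men
# (roughly the 1-in-12 split Mehra saw in her Midjourney search).
dataset = ["man"] * 11 + ["woman"] * 1

def generate(data, n, seed=0):
    """Stand-in for an image generator: it reproduces the
    frequencies present in its training data, so skew in = skew out."""
    rng = random.Random(seed)
    return [rng.choice(data) for _ in range(n)]

before = Counter(generate(dataset, 1000))

# Fixing the bAIs' approach in miniature: publish counter-examples
# so they end up in the data the model learns from.
augmented = dataset + ["woman"] * 10
after = Counter(generate(augmented, 1000))

print(before["woman"] / 1000)  # stays near the skewed source share
print(after["woman"] / 1000)   # moves toward parity after rebalancing
```

The point of the sketch is only that the output distribution tracks the input distribution: no amount of prompting changes what the model was trained on, which is why the initiative targets the datasets themselves.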

“AI learns from history in order to produce the results for the future,” Mehra explains. “As the human species has progressed, men have been at the forefront of a lot of things, including very difficult professions, like firefighters or pilots. When you talk about science, technology, engineering and mathematics, for example, only 26% of the workforce are women. This is the current situation that we’re living in, and that’s what AI learns from.”

As a demonstration, during the interview, Mehra searched for CEOs, doctors and astronauts in Midjourney. Of 12 images provided, only one showed a woman. “It made me think: how do we change the data itself?” she said.

Fixing biases one image at a time

AI models, Mehra explains, get most of their information from images and datasets provided by the Large-scale Artificial Intelligence Open Network (LAION), a non-profit organisation that releases datasets, code and machine learning models, making them available to all for free. It provides one of the world’s largest image-text datasets: LAION-5B contains more than 5 billion image-text pairs.

Initially, Mehra wanted to address the problem with OpenAI, the American artificial intelligence organisation behind ChatGPT and DALL-E. After many failed communication attempts with OpenAI, Mehra travelled to San Francisco to speak directly with them, but wasn’t allowed in their office. Undeterred, she then decided to create ‘Fixing the bAIs’.

“I thought, ‘You know what? Let’s do it our own way’. This is why we created the website. And we made sure that we put the information we are creating or generating on the websites these guys are downloading from. If I make my images available, OpenAI is going to pick that information.”

Mehra reached out to Aurora50, a UAE-based social enterprise committed to promoting diversity, equity, and inclusion in the workplace, which was immediately on board. More than 60,000 images of women in different professions are now available to use for free online.

Inclusivity is at the heart of the initiative. Images generated for Fixing the bAIs represent all races, religions, ages and body shapes. “Inclusivity was very important for us. I’m from India. Women in my team are from Russia, the Philippines, Pakistan – there are people from everywhere. All of them are doing the best they can in the profession they are in. Why are they not represented?” Mehra asked. 

“Why does a beautiful human being have to represent everything? I’ve got freckles, wrinkles, and pimples – and so does everybody else. And that’s why it’s very important to showcase real women. The real women who are changing the world. It’s not the models that you see in stock images, with their beautiful faces. It is the real women putting their blood, sweat and tears into their professions every day, to reach the levels that they’re reaching. And if they are not being represented, then it’s not good.”

A community dedicated to fixing AI bias

Since its launch, Fixing the bAIs has received many accolades. In April it was used as a case study at the European Council convention on AI and gender.

“Jana Novohradska, a representative at UNESCO and artificial intelligence expert, reached out to me. She was working on the framework for a European rule of law, and wanted to learn more about the campaign. She wanted to present it as a way to tackle gender bias in artificial intelligence,” Mehra says.

Two months later, the European Parliament overwhelmingly approved the Artificial Intelligence Act, the world’s first comprehensive AI law. As part of its digital strategy, the European Union’s priority is to make sure AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly.

Fixing the bAIs has also received many awards. But when asked if she was happy about it, Mehra answered: “I will be truly happy when I get an email from OpenAI saying that they’re going to add our data in their official dataset. That’s the real award. It’s not that I am not happy – but when you set out to change the world, that’s the result you want to see.”

Until then, the Fixing the bAIs team has created a Discord channel for everyone to share their pictures and join the mission to change how women are represented in AI-generated images.

Alia Chebbab


Featured image made with images from Fixing the bAIs

