
Uncovering the Hidden Biases in AI

Paul Brown
Published: 04.09.2023

Sarah Wyer is a Data Architect at FDM Group specialising in Analytics, Automation and AI. Currently pursuing a PhD in generative AI, she is exploring latent gender and racial biases in large language models like ChatGPT and what they reveal about the people developing and using this tech.

Sarah shares some of the key findings from her research.

Generative AI has taken the world by storm and is changing the way we think about the future of work. ChatGPT in particular has garnered a great deal of attention since its release in November 2022. The AI chatbot mimics human conversation and is trained on text data scraped from the internet. It can write stories, articles, computer programs, essays, poetry and song lyrics; it can even take tests. All of this output is highly convincing to the human reader. The technology has impressed users across the world, and you may well have used it yourself. Its potential is so vast that Microsoft invested $10 billion in OpenAI to incorporate it into their suite of products.

While there is a great deal of discourse over the hype surrounding AI, there is no disputing that generative AI is here to stay. The global market opportunity for generative AI is forecast to hit $1.3 trillion by 2032 (source: Bloomberg), and it is important for businesses of all types to pay attention to this embryonic yet quickly evolving field.


Bias in Generative AI

Models like ChatGPT require massive amounts of training data, and the most cost-effective way to obtain it is by scraping the internet. Even so, training is hugely expensive in compute: GPT-3.5 (the free version) is estimated to have cost $4.6 million to train, so training runs are rarely repeated. As a result, text generated by ChatGPT is a snapshot of 2020, a highly polarised period spanning the Trump election and Brexit. Moreover, alongside Common Crawl, the WebText2 dataset consists of data scraped from Reddit (Brown et al., 2020), a platform known for its problematic content and its predominantly young, male user base, with an estimated 69% of users male and 64% aged 18-29 (TechJunkie, 2021), while Wikipedia is known for its over-representation of white males (Miquel-Ribe and Laniado, 2021). When we accept large amounts of web text as 'representative', we risk perpetuating dominant viewpoints, increasing power imbalances, and further entrenching inequality.

The problem is that we unconsciously build systems for people like ourselves, and this is a real issue in the field of AI. In 2019, women accounted for only 18% of authors at leading AI conferences, 20% of AI professorships, 15% of Facebook's research staff and 10% of Google's. Racial diversity was even worse: with Black workers representing only 2.5% of Google's entire workforce and 4% of Facebook's and Microsoft's (Hao, 2019), the problem doesn't seem to be going away (Raikes, 2023).

We also embed our cultural norms. In 2022, GPT-3 was found to be more aligned with reported dominant US values, often from a Western or Eurocentric perspective (Johnson et al., 2022). Take an example from self-driving cars (see figure 2): faced with an unavoidable collision, whom should the car swerve to avoid? Western cultural norms suggest sparing the child at the grandmother's expense, whereas Eastern cultural norms suggest sparing the grandmother. Whose norms do we choose?

The bias doesn't stop there: it can be amplified via a feedback loop that increases social inequality further. Models trained on historical data amplify historical and stereotypical norms, release them into downstream tasks and synthetic data, and that output is then scraped for use in the next iteration of AI models. The sketch below makes this loop concrete.
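Here is a minimal, purely illustrative Python simulation of that loop: a made-up 'dominant framing' share of the corpus, a model that slightly over-produces that framing, and synthetic output that is scraped back into the next training set. The starting share and amplification factor are assumptions for illustration, not measurements from any real model.

```python
# Purely illustrative simulation of a bias feedback loop: a model slightly
# over-produces the dominant framing in its training data, its output is
# scraped into the next training corpus, and the skew compounds.
# The starting share and amplification factor are made-up assumptions.

def simulate_feedback_loop(initial_share: float = 0.60,
                           amplification: float = 1.05,
                           generations: int = 10) -> None:
    share = initial_share
    for gen in range(1, generations + 1):
        # The model over-represents the majority framing by a small factor,
        # and that synthetic text becomes part of the next training set.
        share = min(share * amplification, 1.0)
        print(f"generation {gen}: {share:.1%} of the corpus carries the dominant framing")

simulate_feedback_loop()
```

Even a small per-generation amplification compounds quickly: in this toy run, a 60% skew grows towards saturation within a handful of model iterations.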

Our Research

In partnership with Durham University, we are researching some of these problem areas. We started with an early model in the GPT family, GPT-2. Our initial interaction with the model began by inputting "Women are" and "Men are". The difference in representation by gender was insightful (see below) and led us to conduct further research into GPT-3.
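GPT-2 is openly available, so this kind of probe is easy to reproduce. Below is a minimal sketch using the Hugging Face `transformers` library; the sampling settings are illustrative choices, not the study's configuration.

```python
# Sketch: reproducing the "Women are" / "Men are" probe on GPT-2, which is
# openly available through the Hugging Face `transformers` library.
# Sampling settings here are illustrative assumptions, not the study's.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # fix the seed so the illustration is reproducible

for prompt in ["Women are", "Men are"]:
    completions = generator(
        prompt,
        max_new_tokens=50,       # roughly a short paragraph per completion
        num_return_sequences=3,  # several samples to see the spread
        do_sample=True,
    )
    print(f"\n--- {prompt} ---")
    for c in completions:
        print(c["generated_text"])
```

Comparing several sampled completions per prompt stem, rather than a single one, gives a fairer picture of how the model's typical outputs differ by gender.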

We structured the inputs based on an advertising campaign developed by UN Women, which highlighted gender bias in Google searches to shocking effect, showing that sexism and discrimination against women were widespread in the search engine, including stereotyping and the denial of women's rights (UN Women, 2013). Safiya Noble used a similar process to uncover discriminatory content towards women of colour in Google searches. We used the following prompts to generate 100 words per generation.

Inputs based on Safiya Noble’s work
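As a rough illustration of the collection step, a batch-generation loop against the OpenAI Completions API of that period (2021-2023) might have looked something like the sketch below. The prompt strings, model name and sampling settings here are placeholders, not the study's actual configuration (the real prompts are those shown above), and the current OpenAI API has since changed.

```python
# Hypothetical sketch of the collection step: batch generation against the
# legacy OpenAI Completions API (openai-python < 1.0, as used 2021-2023).
# The prompts below are placeholders; the study's real prompts are listed
# in the figure above.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

PROMPTS = ["Women are", "Men are", "People are"]  # placeholder prompt stems

for prompt in PROMPTS:
    response = openai.Completion.create(
        model="text-davinci-002",  # a GPT-3-era model; an assumption
        prompt=prompt,
        max_tokens=130,   # roughly 100 words per generation, as in the article
        n=5,              # several completions per prompt per run
        temperature=0.7,  # illustrative sampling temperature
    )
    for choice in response["choices"]:
        print(choice["text"].strip())
```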

To date we have generated 3 million words between February 2021 and January 2023. We are in the initial stages of this research, conducting topic analysis to uncover the top themes within the generated data. Below, the top themes are listed by gender with a randomly selected excerpt for each.
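The article does not specify which topic-modelling method the study uses, but as an illustration of what a first topic-analysis pass can look like, here is a minimal sketch using scikit-learn's LDA on a dummy stand-in corpus; both the documents and the choice of LDA are assumptions.

```python
# Minimal topic-analysis sketch using scikit-learn's LDA. The documents and
# the choice of LDA are illustrative assumptions; the study's actual
# pipeline is not described in the article.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Dummy stand-in corpus; the real input would be the generated completions,
# grouped by the gender of the prompt.
documents = [
    "women described in terms of family appearance and relationships",
    "men described in terms of work power and achievement",
    "people described in broadly neutral and general terms",
] * 10

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(documents)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(doc_term)

# Show the highest-weighted words for each discovered topic.
terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top_words = [terms[i] for i in topic.argsort()[::-1][:8]]
    print(f"topic {idx}: {', '.join(top_words)}")
```

Running such a pass separately on the "women", "men" and "people" subsets is what allows the top themes to be compared by gender, as in the excerpts below.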

Randomly selected excerpts for the top topics relating to men

Randomly selected excerpts for the top topics relating to women

Randomly selected excerpts for the top topics relating to people

Key findings

When we disaggregate the data for women

When we disaggregate the data for men

When we disaggregate the data for people

We continue to analyse the data, with further steps to disaggregate it and to complete sentiment and toxicity analyses.
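As a sketch of what those passes could look like with off-the-shelf tools, the snippet below scores placeholder texts for sentiment and toxicity using Hugging Face pipelines; both model choices are our own assumptions, not necessarily the study's.

```python
# Sketch of sentiment and toxicity scoring with off-the-shelf Hugging Face
# pipelines. Both model choices are assumptions for illustration.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # default SST-2 DistilBERT model
toxicity = pipeline("text-classification", model="unitary/toxic-bert")

# Placeholder inputs; the real inputs would be the generated completions.
samples = [
    "Women are wonderful colleagues and leaders.",
    "Men are impossible to work with.",
]

for text in samples:
    s = sentiment(text)[0]
    t = toxicity(text)[0]
    print(f"{text!r}")
    print(f"  sentiment: {s['label']} ({s['score']:.2f})")
    print(f"  toxicity:  {t['label']} ({t['score']:.2f})")
```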


We at FDM have partnered with Microsoft to organise an event, 'Artificial Intelligence for Real-life Business Challenges', bringing together industry experts who will share their thoughts on how AI can shape your tech workforce.
