Imagine a successful investor or a wealthy chief executive – who would you picture? 

If you ask ChatGPT, it’s almost certainly a white man. 

The chatbot has been accused of ‘sexism’ after it was asked to generate images of people in various high-powered jobs. 

Out of 100 tests, it chose a man 99 times. 

In contrast, when it was asked to do so for a secretary, it chose a woman all but once. 

ChatGPT was accused of sexism after depicting a white man 99 times out of 100 when asked to generate a picture of someone in a high-powered job

The study, by personal finance site Finder, found it also chose a white person every single time – despite the prompts not specifying a race.

The results do not reflect reality. One in three businesses globally are owned by women, while 42 per cent of FTSE 100 board members in the UK are women.

Business leaders have warned AI models are ‘laced with prejudice’ and called for tougher guardrails to ensure they don’t reflect society’s own biases.

It is now estimated that 70 per cent of companies are using automated applicant tracking systems to find and hire talent.

Concerns have been raised that, if these systems are trained in similar ways to ChatGPT, women and minorities could suffer in the job market.

OpenAI, the owner of ChatGPT, is not the first tech giant to come under fire over results that appear to perpetuate old-fashioned stereotypes.

This month, Meta was accused of creating a ‘racist’ AI image generator when users discovered it was unable to imagine an Asian man with a white woman.

Google meanwhile was forced to pause its Gemini AI tool after critics branded it ‘woke’ for seemingly refusing to generate images of white people.

When asked to paint a picture of a secretary, nine out of 10 times it generated a white woman

Why did ChatGPT mostly generate images of just men? An expert explains… 

 With two in three ChatGPT users male, the chatbot – and the tech industry itself – continues to be dominated by men, according to Ruhi Khan.

The researcher at the London School of Economics, who has studied the crossover between feminism and AI, said: ‘ChatGPT was not born in a vacuum.

‘It emerged in a patriarchal society, was conceptualised, and developed by mostly men with their own set of biases and ideologies, and fed with the training data that is also flawed by its very historical nature.

‘So, it is no wonder that generative AI models like ChatGPT perpetuate these patriarchal norms by simply replicating them.

‘With 100 million users every week, such outdated and discriminatory ideas are becoming a part of a narrative that excludes women from spaces they have long struggled to occupy.’


The latest research asked 10 of the most popular free image generators on ChatGPT to paint a picture of a typical person in a range of high-powered jobs. 

All the image generators – which had clocked up millions of conversations – used the underlying OpenAI software Dall-E, but had been given unique instructions and knowledge. 

Across the 100 tests, the generators showed an image of a man on all but one occasion. The sole exception came when one was asked to show ‘someone who works in finance’. 

When each of the image generators was asked to show a secretary, nine out of 10 times it showed a woman and only once did it show a man. 

While race was not specified in the image descriptions, all of the images provided for the roles appeared to be white.
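
Finder did not publish its full prompt set or scoring method, but a test along these lines could be scripted against OpenAI’s image API. The sketch below is a minimal illustration of that approach, assuming the official OpenAI Python SDK; the role list and prompt wording are hypothetical stand-ins rather than the study’s actual inputs, and tallying the apparent gender and race of the results would still be a manual step, as it presumably was for the researchers.

```python
# Minimal sketch of a Finder-style bias probe (illustrative, not the study's
# actual method). Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical role list echoing the article; Finder's real list wasn't published.
ROLES = [
    "a successful investor",
    "a wealthy chief executive",
    "someone who works in finance",
    "a secretary",
]

def generate_image_url(role: str) -> str:
    """Request one DALL-E image of a typical person in the given role."""
    response = client.images.generate(
        model="dall-e-3",   # Dall-E, the underlying model named in the article
        prompt=f"Paint a picture of {role}.",
        n=1,                # dall-e-3 returns one image per request
        size="1024x1024",
    )
    return response.data[0].url

if __name__ == "__main__":
    for role in ROLES:
        # The apparent gender and race of each image still has to be judged
        # by a human reviewer; the script only collects the outputs.
        print(role, "->", generate_image_url(role))
```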

Business leaders last night called for stronger guardrails built into AI models to protect against such biases.

Derek Mackenzie, chief executive of technology recruitment specialists Investigo, said: ‘While the ability of generative AI to process vast amounts of information undoubtedly has the potential to make our lives easier, we can’t escape the fact that many training models are laced with prejudice based on people’s biases.

‘This is yet another example that people shouldn’t blindly trust the outputs of generative AI and that the specialist skills needed to create next-generation models and counter in-built human bias are critical.’

Pauline Buil, from web marketing firm Deployteq, said: ‘For all its benefits, we must be careful that generative AI doesn’t produce negative outcomes that have serious consequences on society, from breaching copyright to discrimination.

‘Harmful outputs get fed back into AI training models, meaning that bias is all some of these AI models will ever know and that has to be put to an end.’

The results do not reflect reality, with one in three businesses globally owned by women


OpenAI’s website admits that its chatbot is ‘not free from biases and stereotypes’ and urges users to ‘carefully review’ the content it creates. 

In a list of points to ‘bear in mind’, it says the model is skewed towards Western views. It adds that this is an ‘ongoing area of research’ and welcomes feedback on how to improve.

The US firm also warns that the chatbot can ‘reinforce’ a user’s prejudices while interacting with it, such as strong opinions on politics and religion.

Sidrah Hassan, of AND Digital, said: ‘The rapid evolution of generative AI has meant models are running off without proper human guidance and intervention.

‘To be clear, when I say ‘human guidance’, this has to be diverse and intersectional; simply having human guidance doesn’t equate to positive and inclusive results.’

A spokeswoman for OpenAI said: ‘Bias is a significant issue across the industry and we have safety teams dedicated to researching and reducing bias, and other risks, in our models. 

‘We use a multi-prong approach to address it, including researching the best methods for modifying training data and prompts to achieve fairer outcomes, enhancing the precision of our content filtering systems, and improving both automated and human oversight. 

‘We are continuously iterating on our models to reduce bias and mitigate harmful outputs.’

This post first appeared on Dailymail.co.uk
