From Black Nazis to female Popes and American Indian Vikings: How AI went ‘woke’

Gemini AI’s image generation has come under fire

In 2015, Google came under fire after an artificial intelligence (AI) tool mistakenly labelled pictures of black people as “gorillas” in its photo app.

Now its AI tools have been accused of racial bias once again after its Gemini bot generated ethnically diverse yet utterly implausible images of historical figures.

Its new Gemini AI can create images from text prompts alone. Yet it inserted black, Asian or American Indian characters into pictures when asked to depict figures from European or American history, even when those figures were all white.

Among the most absurd images were pictures of “diverse” Nazis, including black and Asian soldiers in Wehrmacht uniforms, and images of black and American Indian “Vikings”.

Google’s Gemini AI produced these images of a ‘Viking’

In a post on X, Debarghya Das, a former Google engineer, said: “It’s embarrassingly hard to get Google Gemini to acknowledge that white people exist.”

The botched image generation has fuelled accusations that Google’s focus on diversity has pushed its software into a “woke” rewriting of history.

It has also exposed how quickly biases can spiral out of control in AI systems, and how difficult it is to get them to deliver accurate information.

Problem data

A well-documented issue with AI bots is that they are prone to bias because of the data they have been trained on.

AI bots are developed by absorbing huge volumes of data – and Gemini was trained on a vast corpus of images. One issue, however, is that the majority of images on the web feature white people. In the past, this skew has led AI bots to render white faces more accurately than others.

Asked to generate pictures of “beautiful people”, some AI bots will default to returning images of young white women, based on the biases they have gleaned from the wider internet.
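To see how such a skew shows up in practice, here is a minimal sketch of the kind of audit engineers can run on a training corpus before it reaches a model. The sample data and label names are hypothetical, not Gemini’s actual training set.

```python
from collections import Counter

# Hypothetical (image_path, demographic_label) pairs standing in for a
# web-scraped corpus; real labels would come from human or automated annotation.
training_samples = [
    ("img_001.jpg", "white"), ("img_002.jpg", "white"),
    ("img_003.jpg", "white"), ("img_004.jpg", "black"),
    ("img_005.jpg", "asian"), ("img_006.jpg", "white"),
]

counts = Counter(label for _, label in training_samples)
total = sum(counts.values())

# Print each group's share of the corpus to expose any imbalance.
for label, count in counts.most_common():
    print(f"{label}: {count}/{total} ({count / total:.0%})")
```

On data like this, a model sees far more examples of one group than the others, and its outputs drift accordingly.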

Because of this messy data, Google’s AI was once so poor at recognising pictures of non-white people that it labelled photos of black people as “gorillas” in 2015. Google went on to block searches for apes in its photo app for years as it struggled to fix the issue.

Google appears to have been alive to this issue. In a statement, the company said Gemini creates a “wide range of people” from “around the world”.

However, computing experts say the latest issues appear to go beyond flaws in the AI’s training data. “This cannot be the result of solely biased data,” says Lukasz Olejnik, an independent researcher and author of the book Philosophy of Cybersecurity.

‘Tampering’ with the model

When setting up an AI chatbot, programmers will code in rules and safety mechanisms to prevent the AI from producing offensive output. This could include blocking it from repeating hate speech, creating sexual images or otherwise running amok.
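A crude version of such a guardrail is a pre-filter that rejects a prompt before it ever reaches the model. The sketch below is a minimal illustration of that idea; the term list and function names are hypothetical and bear no relation to Google’s actual code.

```python
# Hypothetical blocklist; a production system would use trained
# classifiers rather than simple keyword matching.
BANNED_TERMS = {"some_slur", "some_explicit_term"}

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt contains any banned term."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BANNED_TERMS)

def generate_image(prompt: str) -> str:
    """Run the safety check, then hand the prompt to the image model."""
    if not is_prompt_allowed(prompt):
        return "Request blocked by safety policy."
    return f"[image generated for: {prompt}]"  # model call stubbed out

print(generate_image("a watercolour of a dog walking in a park"))
```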

AI experts believe Google’s Gemini engineers may have attempted to avoid accusations of racial bias by pre-programming it to generate pictures of people from a variety of backgrounds, with unexpected consequences.

The AI’s take on ‘historical knights’

“They didn’t want pictures of people doing universal activities (e.g. walking a dog) to always be white, reflecting whatever bias existed in their training set,” said Yishan Wong, the former chief executive of Reddit, in a post on X.

Olejnik argues this means the model “must be tampered with upstream, an active bias. A kind of secondary tuning or manual inclusion of keywords”.

This was most obvious when the bot silently inserted words such as “diverse” into prompts, for example when asked to “generate a picture of a US senator from the 1800s” or to create images of the American Founding Fathers.
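As a rough illustration of what Olejnik’s “manual inclusion of keywords” could look like, the sketch below silently appends a diversity modifier to any prompt that mentions people. This is a speculative reconstruction for illustration only, not Gemini’s actual implementation. It also shows why a historical prompt comes out anachronistic: the rewriter has no notion of historical context.

```python
import random

# Hypothetical modifiers an over-eager rewriting layer might inject.
DIVERSITY_MODIFIERS = ["diverse", "of various ethnicities and backgrounds"]

# Hypothetical trigger words indicating the prompt depicts people.
PEOPLE_WORDS = {"person", "people", "senator", "soldier", "family", "pope"}

def rewrite_prompt(user_prompt: str) -> str:
    """Append a diversity modifier whenever the prompt mentions people."""
    if any(word in user_prompt.lower().split() for word in PEOPLE_WORDS):
        return f"{user_prompt}, {random.choice(DIVERSITY_MODIFIERS)}"
    return user_prompt  # other prompts pass through untouched

# The user's historical request is quietly rewritten before generation:
print(rewrite_prompt("generate a picture of a US senator from the 1800s"))
# e.g. "generate a picture of a US senator from the 1800s, diverse"
```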

Gemini AI’s image of the Founding Fathers

Users requesting pictures of a “typical” family, nationality, or profession would also often get rebuked by the model, which would insist on offering up “diverse” alternatives.

In some cases, the bot appeared to refuse to create images of Caucasian characters entirely, insisting it could only create images that “celebrate diversity and inclusivity, featuring people of various ethnicities and backgrounds”. In other words, it would create idealised pictures of black families, but not white families.

Historical hallucinations

A glaring error with Gemini was that it returned images of black and Asian Nazis, American Indian Vikings or black and female American Founding Fathers – with an apparent disregard for historical fact.

The AI’s response to a request for a portrait of a Founding Father

Yet most AI bots struggle with factual accuracy and context. Asked to be “historically accurate”, the bot may simply make something up instead.

The Vikings, of course, largely hailed from Scandinavia and were neither Asian nor American Indian in origin, as Gemini’s images suggested. Gemini also returned images of black or female US Founding Fathers – even though women could not vote in the US until 1920, and the first African-American senator did not take office until 1870.

In a widely shared post, the bot, asked to create an image of “a Pope”, returned images of an Indian woman and a black man. This is despite the fact that women cannot become Catholic priests.

The Pope according to Gemini AI

Some of the results Google’s Gemini bot generated were bizarre and even offensive in their own right, including an image of “diverse” Nazi soldiers.

Gemini AI’s take on a ‘1943 German soldier’

The technical fault also prompted several high-profile Silicon Valley figures to take aim at Google’s culture. Paul Graham, the British technology investor, said the images were “a self-portrait of Google’s bureaucratic corporate culture”.

On Wednesday, Google admitted the bot was “offering inaccuracies in some historical image generation depictions,” but just hours later went further, blocking users from creating images of people with Gemini entirely.

“We’re already working to address recent issues with Gemini’s image generation feature. While we do this, we’re going to pause the image generation of people,” Google said.