Google apologises after Gemini generates 'racially offensive' images
Google has apologised for “inaccuracies” in its AI amid claims that its image generator was depicting specific genders, races and historical figures incorrectly.
Users have accused the company’s Gemini chatbot of over-correcting its images to avoid the risk of being labelled racist. For instance, a request for images of America’s founding fathers returned images of women and people of colour.
Google said it was pausing the software’s ability to generate images of people, after admitting that its attempts at creating a “wide range” of results missed the mark.
“We’re aware that Gemini is offering inaccuracies in some historical image generation depictions,” said a statement posted by the firm on X on Wednesday afternoon.
“We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
Google, which is competing with OpenAI and Microsoft for AI supremacy, made Gemini freely available to the public in December. The bot can create pictures based on typed instructions.
In recent days, Gemini has come under fire for what users claim are inaccurate depictions of certain cultures and groups. Right-wing social media accounts, which have previously accused big tech of harbouring a liberal bias, have latched on to the debate.
"It's embarrassingly hard to get Google Gemini to acknowledge that white people exist," computer scientist Debarghya Das, wrote.
“The wokeness and the anti-Whiteness is *literally* built into their entire algorithm,” said an X account called “End Wokeness.”
Replying to the post, Elon Musk wrote: “Gemini is indeed just the tip of the iceberg. The same is being done with Google search.”
The issue of bias in generative AI is a complex one. Google previously parted ways with a top AI researcher, Timnit Gebru, after she co-wrote a paper, “On the Dangers of Stochastic Parrots”, highlighting the potential harms caused by large language models - of which Gemini is the most recent example.
In that research, Gebru and her co-authors found that AI models trained on massive text datasets tend to inherit and amplify existing societal biases, producing harmful stereotypes and discrimination in their output.
Back in 2015, Google was forced to apologise after its Google Photos algorithm mistakenly labelled pictures of black people as “gorillas”.
Google has developed guidelines for the “responsible” development of AI, in which it details its efforts to identify and reduce potential biases. It says it uses inclusive and representative datasets to minimise skewed outcomes, relies on diverse human testers to uncover unexpected biases, and continuously monitors its systems for fairness.