Voices: Black Nazis and a Willy Wonka that traumatises children: Google’s AI problem should worry us all

When movies give us a worst-case-scenario glimpse of the future of artificial intelligence, they tend to focus on the big-picture stuff: roving gangs of human-exterminating robots. A Big Brother superintelligence that watches you use the bathroom. Haley Joel Osment getting frozen in ice. That sort of thing.

But the real danger – at least in the immediate instance – is that our over-reliance on AI is going to fundamentally alter the way we approach basic tasks, particularly those that are creative or research-based. Just as studies have suggested that our increased use of search engines like Google has changed the way we recall and store information in our own brains, our use of AI could completely transform the way we learn and work – and not for the better.

We’ve already seen plenty of instances of the pitfalls of AI – I’ve written about a fair few myself – but one of the more recent examples is perhaps most illustrative of how relying on this technology could lead to serious problems in the future. In response to the popularity of ChatGPT, Google has launched its own AI system, Google Gemini. As well as responding to text prompts, Google’s AI can also generate images based on basic instructions.

However, users quickly found an issue with Gemini’s ability to generate images: when asked to generate pictures of human subjects, it would often make those subjects as ethnically and gender diverse as possible. While that may not sound like a big deal – and could even be viewed as a good thing, considering previous language models’ tendency towards bias and discrimination, such as early versions of Stable Diffusion producing sexually explicit images when given the prompt “a Latina” – this led to some baffling outputs.

If asked to generate a picture of, say, the founding fathers of the United States – a famously male, famously pale group – it might generate a picture of men and women of all races in colonial dress. While that sounds fairly harmless, and even funny, real problems start to emerge when the system is asked to generate images of World War Two, and starts pumping out pictures of Black and Asian Nazi soldiers.

While it isn’t clear why exactly Gemini has gone woke in the worst imaginable way, it’s possible that in trying to avoid issues of potential bias, Google had programmed its AI system to be inclusive by default, without really thinking about how that could go wrong (inclusivity being, generally speaking, a good thing). Prabhakar Raghavan, senior vice president at Google, suggested this may be the case, writing in a blog post that “our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range”.

Whatever the reason, the main takeaway from the episode is clear: this technology is not ready yet. Moreover, there’s a possibility that it will never be ready – or, at least, may never reach the point that it can be fully relied on to produce consistent, dependable results.

We saw a particularly striking example of the dangers of over-relying on AI this week, and it happened in the unlikeliest of places – a sketchy warehouse full of drama students somewhere in Glasgow.

If you haven’t seen the many viral videos and first-hand accounts yet, here’s all you need to know: parents in Scotland paid £35 to take their children to an event advertised as an interactive “Willy Wonka experience”. When they arrived, they were greeted by a sparsely decorated warehouse full of bored actors who didn’t seem to know what they were doing – and, worst of all, there was not a scrap of chocolate in sight. The event predictably descended into chaos, with reports that parents “rioted” over the poor-quality, overpriced day out.

The most interesting part of the story, though, is that the organisers seemed to rely on AI for certain aspects of the “experience”. Promotional materials on Facebook were AI-generated, and so poorly done that the text on them was complete gibberish. Actors were handed script pages full of AI “nonsense”, and when they asked how certain things were going to work logistically they were told to “improvise”.

The whole incident works as a sort of case study of the inherent dangers in people’s approach to this technology. Sure, it doesn’t sound like the organisers were ever going to provide a really well-thought-out immersive experience for these kids, but they were clearly under the impression that they could leave large parts of the work to a brand-new technology that they didn’t really understand how to use.

That’s the thing: it’s not necessarily that AI itself is bad. But people will glom on to shiny new toys and implement them in their daily lives without a second thought. To the average person, their smartphone may as well have been constructed by witches for all they know. How often do we just click on the top link of a Google search and assume that’s the “best” result for what we need? How many times have you bought something on Amazon because Siri recommended it to you?

People are extremely eager to give over huge parts of their lives to AI without really knowing if it’s ever going to be in a position to take on those duties. Sure, for now it’s just a dodgy kids’ show (and to be fair, if you pay £35 for a mystery day out in a spooky warehouse, that’s on you), but is this really a million miles away from what Hollywood wants to do with AI scripts and actors? Tyler Perry just announced that he’s putting a stop to an $880m studio expansion because he believes OpenAI’s new video generation tool will be able to save him money. Is this the same AI that thinks George Washington is a Chinese lady? Good luck with that, Mr Perry.

AI could be the next big technological leap forward, whether we like it or not. But we have to be careful, before we get there, not to give ourselves over to it completely before it’s even ready.