You can’t believe a word any of these people is saying – that’s the ‘deep fake’ era for you

Composite: AP/Getty

We are entering an age in which you can no longer trust your ears or eyes. Bots, trolls and fake news merchants have demolished the idea that you can believe what you read online. But audio and video always felt like truth’s life raft, offering an accurate portrayal of reality we could cling to.

Not for much longer. Forget post-truth, this is the era of post-reality, where “deep fake” software will allow anyone to create believable video footage of anyone saying anything. Last week, some artists published a convincing deep fake of the Facebook chief, Mark Zuckerberg, saying: “The more you express yourself, the more we own you.” A couple of months ago, Barack Obama was out there telling viewers to “stay woke, bitches”, while before that the young anti-gun activist Emma González was faked into tearing up the US constitution, to the misplaced fury of some Republicans.

There are hundreds of these videos circulating, with more every day. The US House intelligence committee, worried by the possible national security risks, met to discuss this last week.

In some ways, deep fakes aren’t all that new: the selective editing and clipping of real footage to create a falsehood, a “shallow fake”, you could say, is already the staple of conspiracy theorists and even the odd respectable news outlet. And large-scale political fakery is as old as the hills: your grandparents might remember the “Zinoviev letter”, a 1924 forgery published by the Daily Mail that was purportedly from the Soviet Comintern, asking the British Communist party to engage in sedition.

The difference now is that it is cheaper, easier, quicker and far more convincing. Deep fakes still aren’t perfect, but digital technologies improve rapidly. They were originally designed to improve movie production, but within five years what was once the million-dollar CGI preserve of Hollywood studios will be a free app on your phone.

It’s easy to imagine what happens then. Fake but perfect videos will circulate of Donald Trump saying he’s secretly a member of the Ku Klux Klan or that George Soros is funding an anti-democratic coup. You will see the announcement of a nuclear strike – leaked on Twitter. A supposedly “private recording” where a candidate admits to colluding with a foreign government will fly around the world faster and farther than any press-released rebuttal. No doubt a future Cambridge Analytica will be paid to make sure these videos reach certain key swing voters the day before an election.

The possibilities are especially dangerous in countries with existing ethnic or religious tensions and lower levels of digital literacy. In India, simple faked images and videos of alleged child kidnappings have led to lynchings, while in Gabon rumours about a deep fake video of President Ali Bongo created a political crisis. The problem won’t be restricted to the rich and famous – you too might be the subject of a deep fake. There are chatrooms in seedy digital corners where developers will make fake sex tapes of anyone you ask them to – an ex, an annoying colleague? – for around $20. (Like many pioneering technologies, deep fakes are being popularised via pornography.)

There is already a counter-movement: academic conferences, the US military and Facebook researchers are all involved in an arms race, trying to build fraud-spotting tech. (Literally in some cases: one technique involves looking for unusual arm gesticulation.) This is vital work – perhaps the most important technological task of the next 10 years – but it’s only part of the answer. Deep fakes’ greatest strength is not technological, but our willingness to believe and click “share” for any old nonsense so long as it fits in with our pre-existing views about the world. You might assume that deep fakes mean everyone will believe everything they see, but the real risk to democracy is the opposite: no one will believe anything at all. If everything is potentially fakable, everything can be dismissed as a lie. A leading politician caught lying on camera? It’s a deep fake! At the first hint of a crisis, “previously unseen footage” will emerge, while old speeches will be edited with expedient content inserted.

The main effect of deep fakes in our politics therefore will not be to spread lies, but, rather, confusion and apathy. Authoritarians here and abroad must be thrilled. Over the past few years, they have developed a new technique of censorship by distraction, smothering truths under a torrent of meaningless rubbish. They will soon be able to do this automatically, pumping out millions or even billions of pieces of content to keep everyone suitably confused.

As the political theorist Hannah Arendt wrote in the 1950s, the ideal subject of an authoritarian regime is not a committed Nazi or Bolshevik, but someone for whom “the distinction between fact and fiction, true and false, no longer exists”, because they are far more malleable.

The health of democracies all over the world will depend on finding ways to re-establish the veracity of video and audio content – and temper our own willingness to believe or disbelieve according to our own prejudices. And if we can’t? In the face of constant and endless deep fakes and deep denials, the only rational response from citizens will be extreme cynicism and apathy about the very idea of truth itself. They will conclude that nothing is to be trusted except their own gut instincts and existing political loyalties.

In other words, the age of deep fakes might even succeed in making today’s visceral and divided politics look like a golden age of reasonableness.

• Jamie Bartlett is the author of The People vs Tech