Algorithms outdo us. But we still prefer human fallibility | Rafael Behr

There are no monuments to Ned Ludd. This may be because there is no certainty that the man whose name was adopted by insurrectionary textile workers at the start of the 19th century actually existed. But Robin Hood probably wasn’t a real person, and Nottingham has a statue of him. Doncaster named an airport after him. Copenhagen’s most famous landmark is a bronze mermaid. Fictionality is no bar to commemoration.

Ludd’s problem is not his mythic status but his association with futility. The original Luddite target was exploitative bosses, not the machines they introduced, but the word is now irreversibly linked to fear of technology – one of history’s most enduring lost causes. The martyred spirit of Ludd was invoked when the novelist Howard Jacobson warned last week that smartphones and social media were corroding attention spans, bleaching nuance from public discourse and killing literacy, with the result that within 20 years “we will have children who can’t read, who don’t want to read”. It is the vintage lament of an older generation, appalled by the shallow mores of youth. Similar charges were once laid against television, radio, newspapers and the novel.

Luddism refuses to die because each innovation creates a pool of people who feel economically or culturally dispossessed. The greater the leap forward, the wider the chasm of obsolescence. And the scale of the digital revolution defies hyperbole. No area of human activity is undisrupted. Every story to lead the news this summer contains a chapter on social media. The rise of the “alt-right” in the US cannot be narrated without the mustering of its acolytes on Facebook, nor can jihadist terrorism be explained without virtual recruitment. When the Crown Prosecution Service says it wants to prosecute more hate crime online, it is acknowledging that the boundary between digital and analogue experience is dissolving.

The depth of the change has been brought home to me by something less newsworthy: Spotify’s Discover Weekly playlist. The music streaming service observes a user’s listening habits and assembles a selection of recommendations. It is imperfect, but it is no worse a judge of what I may enjoy than old friends are. So much so that I find myself ascribing intent to the algorithm. It is hard not to be offended when it chooses badly. What made you think I would like that, Spotify? You know me better than that.

This phenomenon was observed long before the internet was a thing. In 1944 two psychologists, Fritz Heider and Marianne Simmel, demonstrated it with a crude animation. A circle and two triangles bob and weave around a rectangle with one side that opens like a door. It is impossible to watch the film without ascribing personality to the objects and organising their movements into a story. The bigger triangle becomes a baddie, bullying the other shapes around their “house”. Our brains bestow character and motive on anything that has the superficial appearance of autonomous agency. Consider the persistence of our affection for the domestic cat, even as it exploits our sentimental nature to secure food and shelter. It is irrational to presume that cats are our conscious companions. And if robots could produce a plausible simulation of conscious engagement, people would be practically powerless to resist treating them as sentient beings.

This year AlphaGo, a computer designed by Google’s DeepMind unit, beat the world champion at the ancient Chinese strategy game Go, deemed supremely challenging for artificial intelligence because the near-infinite range of moves was thought to be navigable only by intuition. The computational feat is impressive. But more revealing is the difficulty in describing advanced mechanics without metaphors of human experience. When AlphaGo blundered, Demis Hassabis, DeepMind’s founder, tweeted that the computer “thought it was doing well, but got confused”.

In what sense did the programme “think”? Did it “want” to win? We have no vocabulary of invisible motivation that doesn’t presume conscious cognition. We have only inverted commas to police the line between coded volition and the real thing. By contrast, we are generous in ascribing “thought” to wildly unpredictable primate impulse. Would you rather have AlphaGo or Donald Trump make the calculations in a strategic confrontation with North Korea? Probably still Trump, but only because we need to believe that there is a magic ingredient that gives humans the edge as arbiters of humanity’s interests. Or we dodge the issue with religion.

Diagnostic software already routinely beats doctors at mapping symptoms to ailments. When the contest isn’t even close, will we insist on a manual override to spare Homo sapiens’ blushes? I’m ready to bet that, when the choice comes, fallibility will be cherished over mechanical consistency. The challenge is not new. Fyodor Dostoevsky pondered it in his 1864 novella Notes from Underground. The narrator berates theories of utopian materialism that reduce the universe to an elaborate operation of cogs moving in obedience to verifiable physical laws. He concludes that people embrace irrationality – going against their ostensible self-interest – as the price of salvaging the idea of the soul: “Twice two makes four is an excellent thing, but if we are to give everything its due, twice two makes five is sometimes a very charming thing too.”

The Underground Man is not real, but that makes his argument stronger. He speaks to us from the depths of a tortured imagination – a realm that is still far beyond the capacity of Google’s algorithms. It is the zone of ambiguity and imprecision whose decline Jacobson laments when he warns that Twitter reduces us to a world of statement: “There are many good statements in the world, but much of the best part of thought and conversation isn’t statement, it’s exploration, inquiry, irony.”

It often feels as if the subtlety of analogue experience is being pulverised into platitude by the digital machine. It is easy to conjure fear of enslavement to robots, and maybe resistance is futile Luddism. But maybe also we underestimate old Ned. As testimony to the power of the imagination, he is stubbornly, reassuringly immortal.

• Rafael Behr is a Guardian columnist