Don’t worry about AI going bad – the minds behind it are the danger

Fears over machine intelligence have been a sci-fi staple for decades: Yul Brynner’s malevolent robot in the film Westworld (1973). Photograph: MGM/Kobal/REX/Shutterstock

As the science fiction novelist William Gibson famously observed: “The future is already here – it’s just not very evenly distributed.” I wish people would pay more attention to that adage whenever the subject of artificial intelligence (AI) comes up. Public discourse about it invariably focuses on the threat (or promise, depending on your point of view) of “superintelligent” machines, ie ones that display human-level general intelligence, even though such devices have been 20 to 50 years away ever since we first started worrying about them. Such machines remain a distant prospect (or a mirage), a point made by the leading AI researcher Andrew Ng, who said that he worries about superintelligence in the same way that he frets about overpopulation on Mars.

That seems about right to me. If one were a conspiracy theorist, one might ask if our obsession with a highly speculative future has been deliberately orchestrated to divert attention from the fact – pace Mr Gibson – that lower-level but exceedingly powerful AI is already here and playing an ever-expanding role in shaping our economies, societies and politics. This technology is a combination of machine learning and big data and it’s everywhere, controlled and deployed by a handful of powerful corporations, with occasional walk-on parts assigned to national security agencies.

These corporations regard this version of “weak” AI as the biggest thing since sliced bread. The CEO of Google burbles about “AI everywhere” in his company’s offerings. Same goes for the other digital giants. In the face of this hype onslaught, it takes a certain amount of courage to stand up and ask awkward questions. If this stuff is so powerful, then surely we ought to be looking at how it is being used, asking whether it’s legal, ethical and good for society – and thinking about what will happen when it gets into the hands of people who are even worse than the folks who run the big tech corporations. Because it will.

Fortunately, there are scholars who have started to ask these awkward questions. There are, for example, the researchers who work at AI Now, a research institute at New York University focused on the social implications of AI. Their 2017 report makes interesting reading. Last week saw the publication of more in the same vein – a new critique of the technology by 26 experts from six major universities, plus a number of independent thinktanks and NGOs.

Its title – The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation – says it all. The report fills a serious gap in our thinking about this stuff. We’ve heard the hype, corporate and governmental, about the wonderful things AI can supposedly do and we’ve begun to pay attention to the unintentional downsides of legitimate applications of the technology. Now the time has come to pay attention to the really malign things bad actors could do with it.

The report looks at three main “domains” in which we can expect problems. One is digital security. The use of AI to automate tasks involved in carrying out cyber-attacks will alleviate the existing trade-off between the scale and efficacy of attacks. We can also expect attacks that exploit human vulnerabilities (for example, through the use of speech synthesis for impersonation), existing software vulnerabilities (through automated hacking) or the vulnerabilities of legitimate AI systems (through corruption of the data streams on which machine learning depends).

A second threat domain is physical security – attacks with drones and autonomous weapons systems. (Think v2.0 of the hobbyist drones that Isis deployed, but this time with face-recognition technology on board.) We can also expect new kinds of attacks that subvert physical systems – causing autonomous vehicles to crash, say – or ones deploying physical systems that it would be infeasible for humans to control remotely: a thousand-strong swarm of micro-drones, for example.

Finally, there’s what the authors call “political security” – using AI to automate tasks involved in surveillance, persuasion (creating targeted propaganda) and deception (eg, manipulating videos). We can also expect new kinds of attack based on machine-learning’s capability to infer human behaviours, moods and beliefs from available data. This technology will obviously be welcomed by authoritarian states, but it will also further undermine the ability of democracies to sustain truthful public debates. The bots and fake Facebook accounts that currently pollute our public sphere will look awfully amateurish in a couple of years.

The report is available as a free download and is worth reading in full. If it were about the dangers of future or speculative technologies, it might be reasonable to dismiss it as academic scare-mongering. The alarming thing is that most of the problematic capabilities its authors envisage are already available and, in many cases, already embedded in the networked services that we use every day. William Gibson was right: the future has already arrived.