Thousands of leading AI researchers sign pledge against killer robots

A man walks past an armed robotic system at a weapons trade fair. Photograph: Brendan Smialowski/Bloomberg/Getty Images

Thousands of scientists who specialise in artificial intelligence (AI) have declared that they will not participate in the development or manufacture of robots that can identify and attack people without human oversight.

Demis Hassabis at Google DeepMind and Elon Musk at the US rocket company SpaceX are among more than 2,400 signatories to the pledge, which aims to deter military firms and nations from building lethal autonomous weapon systems, also known as Laws.

The move is the latest from concerned scientists and organisations to highlight the dangers of handing over life and death decisions to AI-enhanced machines. It follows calls for a preemptive ban on technology that campaigners believe could usher in a new generation of weapons of mass destruction.

Orchestrated by the Boston-based Future of Life Institute, the pledge calls on governments to agree norms, laws and regulations that stigmatise and effectively outlaw the development of killer robots. In the absence of such measures, the signatories pledge to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.” More than 150 AI-related firms and organisations added their names to the pledge, to be announced today at the International Joint Conference on AI in Stockholm.

Yoshua Bengio, an AI pioneer at the Montreal Institute for Learning Algorithms, told the Guardian that if the pledge was able to shame those companies and military organisations building autonomous weapons, public opinion would swing against them. “This approach actually worked for land mines, thanks to international treaties and public shaming, even though major countries like the US did not sign the treaty banning landmines. American companies have stopped building landmines,” he said. Bengio signed the pledge to voice his “strong concern regarding lethal autonomous weapons.”

The military is one of the largest funders and adopters of AI technology. With advanced computer systems, robots can fly missions over hostile terrain, navigate on the ground, and patrol under seas. More sophisticated weapon systems are in the pipeline. On Monday, the defence secretary Gavin Williamson unveiled a £2bn plan for a new RAF fighter, the Tempest, which will be able to fly without a pilot.

UK ministers have stated that Britain is not developing lethal autonomous weapons systems and that its forces will always have oversight and control of the weapons they deploy. But the campaigners warn that rapid advances in AI and other fields mean it is now feasible to build sophisticated weapons that can identify, track and fire on human targets without consent from a human controller. For many researchers, giving machines the decision over who lives and dies crosses a moral line.

“We need to make it the international norm that autonomous weapons are not acceptable. A human must always be in the loop,” said Toby Walsh, a professor of AI at the University of New South Wales in Sydney who signed the pledge.

“We cannot stop a determined person from building autonomous weapons, just as we cannot stop a determined person from building a chemical weapon,” he added. “But if we don’t want rogue states or terrorists to have easy access to autonomous weapons, we must ensure they are not sold openly by arms companies.”

Researchers can choose not to work on autonomous weapons, but what others do with their published breakthroughs is effectively beyond their control. Lucy Suchman, another signatory and professor of anthropology of science and technology at Lancaster University, said that even though researchers cannot fully control how their work is used, they can engage and intervene when they have concerns.

“If I were a machine vision researcher who had signed the pledge I would, first, commit to tracking the subsequent uses of my technologies and speaking out against their application to automating target recognition and, second, refuse to participate in either advising or directly helping to incorporate the technology into an autonomous weapon system,” she said.