'Killer robots' could become reality unless Government bans autonomous weapons, Lords report claims

Harry Yorke

“Killer robots” which threaten to “hurt, destroy or deceive human beings” could become reality unless the Government improves regulation on artificial intelligence, a Parliamentary report has suggested.

A Lords select committee has warned that while Terminator-style weapons may not yet exist, without checks and balances Britain could end up “stumbling through a semantic haze into dangerous territory.”

In a paper published today, the peers state that Britain’s definition of military-grade AI differs significantly from other NATO members, with even the US taking a more cautious approach to the technology.

While a number of countries and campaigners, including the billionaire Elon Musk, have called for preemptive legislation to outlaw the technology from use on the battlefield, the UK Government has opposed efforts to ban its development.

It comes amid growing international concern that the rapid advance of the technology could soon result in “lethal autonomous weapons” being deployed in conflict zones.

Meanwhile, at least 381 partly autonomous weapon and robotic systems are now operational or in development in 12 countries, including the UK, US, Israel and France.

They include an unmanned aircraft currently at prototype stage in the US, while the UK is developing its own driverless vehicles which could be weaponised in the future.

Taranis, a British stealth drone named after the Celtic god of thunder, can already evade radar detection and fly in autonomous mode.

Separately, the Russian military is amassing an arsenal of aerial and ground vehicles in a situation described by experts as the “new arms race”. Last year, Vladimir Putin warned that “whoever leads in AI will rule the world”.

The concept of killer robots was famously envisioned in the Terminator films, a science fiction series directed by James Cameron and starring Arnold Schwarzenegger, in which the US defence system Skynet becomes self-aware and attempts to wipe out humanity.

Last night the chairman of the Lords committee on artificial intelligence, Lord Clement-Jones, said that it was vital that the Government adopt a set of new ethical rules to help “mitigate” the risks associated with AI.


“The UK has a unique opportunity to shape AI positively... rather than passively accept its consequences,” he added.

“It is essential that ethics take centre stage in AI’s development and use. AI is not without its risks and the adoption of the principles proposed by the Committee will help to mitigate these.

“An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse.”

While the Ministry of Defence classifies weapons as AI if they are “aware and show intention”, academics have warned that the UK is setting the “bar so high” that the definition is “effectively meaningless”.

In contrast, allies including France, the Netherlands and the US have adopted more cautious definitions, which the University of Sheffield’s Professor Noel Sharkey said demonstrated the UK was “out of step” with the majority of other governments.

In the report, the peers note that “without agreed definitions we could easily find ourselves stumbling through a semantic haze into dangerous territory.

“The Government’s definition of an autonomous system used by the military... is clearly out of step with the definitions of most other governments.

“This position limits both the extent to which the UK can meaningfully participate in international debates on autonomous weapons and its ability to take an active role as a moral and ethical leader on the global stage in this area.”