Killer robots need 'no new rules' about firing on humans, Russia tells UN

Images of the British Army’s Watchkeeper Unmanned Aerial System being prepared and launched from RAF Akrotiri. The Watchkeeper is operated by crews of 47 Regiment Royal Artillery (47RA). After completing basic crew training in the UK, the crews come out to Akrotiri to maintain vital flying currencies and complete checkpoints, helped by the good weather around Cyprus. After ground technicians have prepared the aircraft, it is towed onto the runway before pilots throttle up and take to the skies, completing numerous sorties all year round.

Lethal drone weapon systems require “no new regulations” over whether they can fire on humans, Russia has said, as the Red Cross warns so-called "killer robots" should not "decide who lives or dies".

Speaking on Tuesday at a UN conference in Geneva on the ethics of lethal autonomous weapons, the Russian delegate said such systems “ought to comply with the principles of necessity and proportionality” in the same way as human soldiers.

The conference, running until August 13 and attended by diplomats from 50 countries, hopes to establish regulations to prevent “killer robots” making their own decisions.

There was a “current lack of convincing justification for imposing new restrictions or prohibitions” on such weapons, Russia's delegate said.

“The high level of autonomy of these weapons allows [them] to operate within a dynamic conflict situation and in various environments while maintaining an appropriate level of selectivity and precision.

“As a result it ensures the compliance with [existing] rules of international humanitarian law.”

Risk of conflict escalation if machines make life-or-death decisions

The Russian position was not supported by other delegates. The US called for more regulations and for the conference to "endorse more practices" over the use of such weapons.

Dr Neil Davison, a scientific and policy adviser at the International Committee of the Red Cross (ICRC), said autonomous systems were dangerous because “the user doesn't actually choose what they're fired at, when they fire or exactly where they fire, so there’s inherent risk to civilians in that."

Speaking on the BBC's Today programme, he warned of the “risks of conflict escalation” in allowing machines a greater say in when to use lethal force.

“Humans must apply the rules of international humanitarian law in carrying out attacks so weapons that function in this way complicate that.

“Our view is that an algorithm shouldn't decide who lives or dies.”

The value of AI

Autonomous weapon systems using Artificial Intelligence (AI) include drones able to operate in the air, on land, and above and below water.

Proponents say such systems limit the risk to human life by allowing fewer soldiers to be placed in harm’s way, while ethics campaigners fear humans will eventually be removed from the decision over when to open fire.

Last year’s conflict between Armenia and Azerbaijan showed how drones can have a decisive edge on the modern battlefield.

Over 40 per cent of Armenia’s tanks and armoured vehicles were destroyed by so-called suicide drones that were able to scan the ground to identify military kit before attacking.

Experts warn that achieving international consensus on a “legally binding obligation to retain meaningful human control over the use of force is difficult yet imperative to achieve”.

The 'violation of human dignity'

Frank Sauer, a member of the International Committee for Robot Arms Control, and research fellow at the Bundeswehr University in Munich, said “unintended conflict escalation at machine speed and the violation of human dignity outweigh any short-term military benefits”.

“Some states, most prominently Russia, have displayed no interest in producing new international law” to regulate autonomy in weapon systems, Mr Sauer said in a paper for the International Review of the Red Cross.

Producing any new law was difficult as it “requires new diplomatic language and because the military value of weapon autonomy is hard to forgo in the current arms control winter”.

“The strategic as well as ethical risks outweigh the military benefits of unshackled weapon autonomy,” he warned.

The ‘one ring to rule them all’ is AI

Speaking exclusively to the Telegraph in Estonia earlier this year, General Sir Patrick Sanders, the head of Britain’s Strategic Command, said developing AI systems was not a quest for killer robots.

The UK military’s cyber chief said Britain would be “mad” not to aim to be a world leader in AI, adding such systems would be central to emerging technologies such as quantum computing, biotechnology and the military’s use of cyberspace.

“Of all the new technologies, the one ring to rule them all is AI.

“There’s a lot of concern out there about killer robots and ethics. Actually the real use of AI is to support humans, to be under the command of humans.

“The idea of human-machine teaming implies you can team with a computer. There isn’t a team, the humans are in charge.”
