Australian army uses “telepathy” to control robot dogs
How will militaries of the future control robot troops on battlefields where communication is vulnerable to gunfire and explosions? The Australian army thinks the solution is mind control.
Its soldiers recently used “telepathy” to guide a robot dog on a simulated patrol as part of two successful demonstrations. Army personnel wore an off-the-shelf Microsoft HoloLens 2 mixed-reality headset fitted with a custom AI decoder that captured their brain waves and translated them into useful commands. The decoder was built using a Raspberry Pi, a low-cost computer.
That sets the method apart from other brain-machine interface technology that uses invasive brain implants, such as Elon Musk’s Neuralink, or medical devices that can help patients who have lost the ability to speak.
In a video released by the Australian army, a soldier is shown “telepathically” directing a four-legged Ghost Robotics Vision 60 robot across a series of waypoints in an open field. The soldier chose the destinations for the droid by thinking about them.
Following that test, the army conducted a second series of challenges in which a soldier used brain signals to guide the same droid as it helped human soldiers clear several buildings.
“The potential of the project is very broad,” said Sgt Damien Robinson of the Australian army’s Combat Service Support Battalion. “At its core, it’s translating brain waves into zeros and ones, and that can be implemented to a number of different systems.”
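The quote above describes the core idea: a decoder reduces brain waves to a small set of discrete signals, which can then be mapped onto any system's command set. A minimal sketch of that mapping stage is below. Everything here is hypothetical — the intent classes, command names, and the trivial stand-in decoder are illustrative assumptions, not details of the Australian army's system, which uses a trained AI model.

```python
# Hypothetical sketch of mapping decoded brain-signal classes to robot
# commands. None of these class labels or command names come from the
# actual project; they only illustrate "translating brain waves into
# zeros and ones" that different systems can consume.

# Assumed decoder output: one of a few discrete intent classes.
INTENT_TO_COMMAND = {
    0: "HOLD",              # no movement intent detected
    1: "GOTO_WAYPOINT_A",
    2: "GOTO_WAYPOINT_B",
    3: "RETURN_TO_HANDLER",
}

def decode_intent(eeg_window):
    """Stand-in for the real AI decoder.

    A real system would run a trained classifier over a window of
    filtered EEG samples; this placeholder just returns the index of
    the channel with the highest average amplitude.
    """
    averages = [sum(channel) / len(channel) for channel in eeg_window]
    return max(range(len(averages)), key=averages.__getitem__)

def brainwaves_to_command(eeg_window):
    """Map a window of signal samples to a robot command string."""
    intent = decode_intent(eeg_window)
    return INTENT_TO_COMMAND.get(intent, "HOLD")

# Example: four fake "channels" of samples; channel 2 is strongest,
# so the decoded intent maps to GOTO_WAYPOINT_B.
fake_window = [[0.1, 0.2], [0.3, 0.1], [0.9, 0.8], [0.2, 0.2]]
print(brainwaves_to_command(fake_window))  # → GOTO_WAYPOINT_B
```

Because the robot only ever sees the command string, the same decoder output could in principle drive other systems by swapping the lookup table — which is the breadth Sgt Robinson alludes to.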
The tests were conducted by the Australian army’s Robotic and Autonomous Systems Implementation and Coordination Office, supported by researchers from the University of Technology Sydney, the Defence Innovation Hub, and the Defence Science and Technology Group.
Despite hailing the success of the demonstrations, the Australian army insists that mind-controlled robots won’t appear on real-life battlefields any time soon. How far are we from such a future? The US Department of Defense, which is funding research into brain-machine interfaces, has previously said that a system suitable for military purposes is probably decades away.
There is also the broader, hot-button question of the ethics of using killer robots in law enforcement and warfare.
San Francisco officials recently reversed a policy that would have allowed the police to use remote-controlled lethal machines, following an outcry from citizens and civil rights groups. Before that, the NYPD had cancelled a contract to use Boston Dynamics’ robot dog as a crime-fighting sidekick because of concerns about police militarisation and abuses of force.
Here in the UK, the Ministry of Defence previously said that its policy is that only human soldiers will be able to fire weapons. Earlier in February, more than 60 countries including the US and China signed a “call to action” endorsing the responsible use of artificial intelligence in the military. However, human rights groups warned that the statement was not legally binding, and did not address fears over “slaughterbots” that could kill without human intervention.