
Met police to begin using live facial recognition cameras

Photograph: Kirsty O’Connor/PA

The Metropolitan police will start using live facial recognition, Britain’s biggest force has announced.

The decision to deploy the controversial technology, which has been dogged by privacy concerns and questions over its lawfulness, was immediately condemned by civil liberties groups, who described the move as “a breathtaking assault on our rights”.

But the Met said that after two years of trials, it was ready to use the cameras within a month. The force said it would deploy the technology overtly and only after consulting communities in which it is to be used.

Nick Ephgrave, an assistant commissioner, said: “As a modern police force, I believe that we have a duty to use new technologies to keep people safe in London. Independent research has shown that the public support us in this regard.”

The Met said the cameras would be linked to a database of suspects. If the system detects someone who is not on the database, their information will not be kept. But if it generates an alert because the person is wanted, an officer will speak to them.
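The Met has not published the technical detail of that matching step. Purely as an illustration of the flow described above, a minimal sketch might look like the following; the threshold, function names and data structures are assumptions, not the force’s actual system.

```python
from dataclasses import dataclass

MATCH_THRESHOLD = 0.6  # hypothetical similarity cut-off, not a Met figure


@dataclass
class WatchlistEntry:
    name: str
    embedding: list[float]  # numeric "faceprint" for a wanted individual


def similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two face embeddings (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)


def screen_face(embedding: list[float], watchlist: list[WatchlistEntry]):
    """Compare one detected face against the watch list.

    Mirrors the flow described above: no match means the passer-by's
    data is discarded; a match raises an alert for an officer to act on.
    """
    best = max(watchlist, key=lambda e: similarity(embedding, e.embedding), default=None)
    if best is not None and similarity(embedding, best.embedding) >= MATCH_THRESHOLD:
        return best  # alert: an officer decides whether to speak to the person
    return None  # no match: nothing is retained
```

In a system of this kind the match threshold is the central trade-off: raising it produces fewer false alerts on passersby, but increases the chance of missing someone who is on the watch list.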

Johanna Morley, a senior technologist with the Met, said the system was 70% effective at spotting wanted suspects. It falsely identified someone as wanted one in a thousand times, she said.
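Those two figures measure different things: 70% is how often the system spots someone who really is on a watch list, while one in a thousand is how often it wrongly flags somebody who is not. As a rough illustration of what the second figure implies at street scale (the scan volume below is an assumption, not a Met statistic):

```python
# Rough arithmetic on the Met's quoted one-in-a-thousand false match rate.
false_match_rate = 1 / 1000
faces_scanned = 10_000  # hypothetical number of passers-by at one deployment

expected_false_alerts = faces_scanned * false_match_rate
print(f"Expected false alerts: about {expected_false_alerts:.0f}")  # about 10
```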

[Graphic: facial recognition]

Civil liberties groups reacted angrily to the move, with one promising to mount an immediate challenge to the decision.

Silkie Carlo, the director of Big Brother Watch, called the decision “an enormous expansion of the surveillance state and a serious threat to civil liberties in the UK”.

Disputing the Met’s figures on its effectiveness, she said: “It flies in the face of the independent review showing the Met’s use of facial recognition was likely unlawful, risked harming public rights and was 81% inaccurate.

“This is a breathtaking assault on our rights and we will challenge it, including by urgently considering next steps in our ongoing legal claim against the Met and the home secretary.”

What is facial recognition software?

Automated facial recognition (AFR) is technology that can identify people by analysing and comparing facial features to those held in a database.
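In practice that usually means converting each face image into a numeric feature vector (an “encoding”) and measuring how close two vectors are. As an illustration, this is roughly how the open-source face_recognition library is used to compare a probe image against a small stored database; the file names here are hypothetical and this is not the system any police force uses.

```python
# Illustration: matching one probe image against a small stored database
# with the open-source face_recognition library (file names are hypothetical).
import face_recognition

# Build the "database": one encoding per known face.
known = {
    "person_a": "person_a.jpg",
    "person_b": "person_b.jpg",
}
known_names = list(known)
known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in known.values()
]

# Encode the probe image (for example, a frame captured by a camera).
probe_image = face_recognition.load_image_file("passerby.jpg")

for encoding in face_recognition.face_encodings(probe_image):
    # One True/False per database entry, based on the distance between
    # encodings and a tolerance threshold (lower tolerance = stricter).
    matches = face_recognition.compare_faces(known_encodings, encoding, tolerance=0.6)
    hits = [name for name, hit in zip(known_names, matches) if hit]
    print("match:", hits or "none")
```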

Where is it used?

You might recognise it from auto-tagging of pictures on Facebook or on your phone, but it is increasingly being used out in the real world.

Shoppers at retail parks such as Westfield in London, for example, are routinely scanned and recorded by dozens of hidden cameras built into the centres’ digital advertising billboards. The cameras can determine not only your age and gender, but your mood, cueing up tailored advertisements within seconds, thanks to facial detection technology.

British police have also used the technology to scan crowds at events and demonstrations to identify ‘people of interest’.

What are the concerns about it?

In the UK a court action claims that South Wales police violated privacy and data protection rights by using facial recognition technology on individuals. The force defended its actions, saying that AFR was similar to the use of DNA to solve crimes and would have little impact on those who were not suspects.

The UK’s biometrics commissioner has warned that police forces are pushing ahead with the use of AFR systems in the absence of clear laws on whether, when or how the technology should be employed.

The pressure group Liberty has denounced AFR as “arsenic in the water supply of democracy”, and the city of San Francisco has already barred the use of automatic facial recognition by law enforcement.

A crucial argument against the police’s deployment of the technology is that it does not yet work very well. It is especially inaccurate and prone to bias when used against people of colour: a test of Amazon’s facial recognition software found that it falsely identified 28 members of US Congress as known criminals, with members of the Congressional Black Caucus disproportionately represented.

There was anger in January 2020 among football supporters and civil rights activists after the technology was used to scan the crowd at the Cardiff City-Swansea City derby.

A spokesperson for the campaign group Liberty said: “This is a dangerous, oppressive and completely unjustified move by the Met. Facial recognition technology gives the state unprecedented power to track and monitor any one of us, destroying our privacy and our free expression.”

Ephgrave said the technology would not be used indiscriminately and that its initial use would be limited. “The Met will begin operationally deploying live facial recognition (LFR) at locations where intelligence suggests we are most likely to locate serious offenders,” he said. “Each deployment will have a bespoke ‘watch list’, made up of images of wanted individuals, predominantly those wanted for serious and violent offences.”

Last year, after a series of legal challenges, judges ruled in favour of live facial recognition, and police believe the ruling paved the way for Friday’s announcement.

South Wales police already use live facial recognition. The Met said that system was less effective at scanning dense crowds.

Ephgrave said: “At a deployment, cameras will be focused on a small, targeted area to scan passersby. The cameras will be clearly signposted and officers deployed to the operation will hand out leaflets about the activity. The technology, which is a standalone system, is not linked to any other imaging system, such as CCTV, body-worn video or ANPR [automatic number-plate recognition].”