Facial recognition may be in its infancy but soon enough it will raise difficult questions for us all

The main criticism of the widespread use of facial recognition technology by the police, now being trialled by the Metropolitan Police in London, is that it doesn’t actually work. Paradoxically, this disarms the many critics of the system, because a surveillance technique can hardly be deemed a threat to liberty if it achieves a pitifully low rate of success.

It is no good for catching criminals, or at least of very limited value; but, by the same token, it poses a fairly limp threat to personal privacy. Wearing a comedy Groucho Marx disguise would enable even the most vicious terrorist to evade detection, if not ridicule. The fact that the success rate is around 2 per cent is, if anything, remarkable given the circumstances.

All that, though, is to miss the point. Before long facial recognition software could grow far more reliable and sophisticated. So it requires some thought and effort now both to control its use and win public support.

Mass screenings of the type currently underway in Oxford Street are wrong in principle if they are covert rather than overt, and if there is inadequate public consultation or explanation of what is going on. This is certainly the case with the Met’s experiments. Not the least of the possible problems is the preference among some Muslim people for their images not to be captured in photography.

Although the exercise was announced by way of a press release, many of the shoppers traipsing round Oxford Street whose facial signatures were being registered seemed unaware of what was going on. They may or may not object to it, on a variety of grounds, innocent or nefarious. However, it would be wrong, again in principle, for such a procedure to be undertaken without people knowing and without good cause, such as the threat of an imminent terror attack. It cannot be right that the Met is conducting these experiments in a regulatory vacuum, especially when there is so much justified concern about the privacy of personal details held in digital form.

There is a case to be made for the targeted use of such screening, provided it has the consent of those taking part. Taylor Swift, for example, is reported to use the software in order to screen potentially dangerous stalkers from her concerts. That, in principle, is no different to a doorkeeper with an especially good memory preventing troublemakers from entering a nightclub, or a football club using mug shots to stop hooligans entering its grounds.

The country, obviously, has many more urgent issues to settle before facial recognition becomes anything like a routine tool in the armoury of law enforcement and counter-terrorism. However, it is not too early for the authorities themselves to develop a more robust set of guidelines than exist at the moment, both for private and public sector users of the software.

The opportunities for snooping, misuse and blackmail are obvious. Plain-clothes officers, unmarked vans and covert operations generally are not likely to reassure the public. Here, as in so many areas, the law is failing to keep up with rapidly evolving technology. Meanwhile, criminals will continue to evade capture, using light cosmetic surgery and simple disguises to defy the best efforts of the authorities. Like most issues in policing, facial recognition is a cat and mouse game between cops and criminals.