Some of the biggest companies in the world are pulling their facial recognition technologies from law enforcement agencies across the country. Amazon (AMZN), IBM (IBM), and Microsoft (MSFT) have said they will either place a moratorium on the use of their technology by police or exit the field entirely, citing human rights concerns.
The technology, which can be used to identify suspects in things like surveillance footage, has faced widespread criticism after studies found it can be biased against women and people of color. And according to at least one expert, there needs to be some form of regulation put in place if these technologies are going to be used by law enforcement agencies.
“If these technologies were to be deployed, I think you cannot do it in the absence of legislation,” Siddharth Garg, assistant professor of computer science and engineering at NYU Tandon School of Engineering, told Yahoo Finance.
“I think some of this will be challenged in court, and we’ll finally find a solution potentially that balances the risk and the benefits, to whatever extent there are benefits.”
Amazon, IBM, and Microsoft out
The first company to make a move away from offering the technology to law enforcement was IBM. On June 8, CEO Arvind Krishna sent a letter to several lawmakers, including Senators Cory Booker (D-NJ) and Kamala Harris (D-CA), indicating that the company would no longer be developing facial recognition software. The move came in the wake of the death of George Floyd and subsequent protests against police brutality and over-policing.
“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies,” Krishna wrote.
“Artificial Intelligence is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.”
Amazon followed suit with a June 10 announcement that it was placing a one-year moratorium on the use of its Rekognition technology by law enforcement agencies.
“We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge,” the company said in a statement.
“We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.”
The following day at a Washington Post Live event, Microsoft President Brad Smith said the tech giant would also refuse to offer its facial recognition technology to police departments until comprehensive legislation is put in place that regulates the tech.
Amazon, IBM, and Microsoft aren’t the only vendors of facial recognition technology. For example, Clearview AI, which came under heavy scrutiny for the way it collected images for its algorithm via Facebook, told Yahoo Finance it will continue to offer its tech to police departments.
Facial recognition bias
One of the main criticisms against facial recognition technology’s use by law enforcement is that it can show bias against women and people of color. In a study, MIT’s Joy Buolamwini and the University of Toronto’s Inioluwa Deborah Raji found that facial recognition software from Amazon, Microsoft, IBM, and software company Megvii was significantly less accurate when identifying women and people of color. Women of color were particularly impacted by the results. Amazon, however, pushed back, saying its own studies and customer reactions ran counter to the study’s results.
Another study of 189 algorithms from some 99 companies by the National Institute of Standards and Technology found that the majority of the software tested had some form of bias.
According to Garg, bias in facial recognition algorithms crops up because of the data fed to them. Algorithms “learn” by being provided with massive amounts of information; for a facial recognition algorithm, that can mean large batches of photos of faces.
If the algorithms are fed more photos of one group over another, it could result in discrepancies in performance.
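The effect Garg describes can be sketched with a toy example. The code below is not a real face recognition model: it trains a simple threshold classifier on synthetic one-dimensional "features," where one group is heavily overrepresented in the training set. The group names, numbers, and feature values are all invented for this illustration; the point is only that a model fit to the dominant group can perform worse on the underrepresented one.

```python
# Toy illustration of training-data imbalance (not a real face
# recognition system). All data here is synthetic and invented.

def train_threshold(samples):
    # Learn a decision threshold as the midpoint between the mean
    # feature value of each class in the training data.
    pos = [x for x, label in samples if label == 1]
    neg = [x for x, label in samples if label == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(samples, threshold):
    # Fraction of samples the threshold classifies correctly.
    correct = sum((x >= threshold) == (label == 1) for x, label in samples)
    return correct / len(samples)

# Group A (overrepresented): the two classes are far apart (2.x vs 5.8-6.0).
group_a = [(2.0, 0), (2.2, 0), (5.8, 1), (6.0, 1)] * 25   # 100 samples
# Group B (underrepresented): both classes sit near the middle (4.2-4.7).
group_b = [(4.2, 0), (4.3, 0), (4.6, 1), (4.7, 1)]        # 4 samples

# The training set is dominated by group A, so the learned threshold
# (about 4.0) works perfectly for A but splits group B's classes badly.
threshold = train_threshold(group_a + group_b)
print(accuracy(group_a, threshold))  # 1.0
print(accuracy(group_b, threshold))  # 0.5
```

Rebalancing the training set (collecting more group B samples, or reweighting them) is one standard way such discrepancies are addressed.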
“The algorithms aren’t biased, but there is bias in the algorithms to be precise,” Garg explained. “There is well demonstrated evidence of bias in face recognition technology.”
A call for legislation
The tech industry is now calling on the federal government to craft legislation regulating facial recognition technologies. But this isn’t the first time the issue has been raised. Senator Booker previously proposed legislation alongside Senator Jeff Merkley (D-OR) calling for a moratorium on the use of facial recognition technology by the federal government until Congress established a legal framework for its use.
A recent proposal by Representative Karen Bass (D-CA) that focused on larger police reforms touches on the use of facial recognition technology, but isn’t specifically about the use of such software.
Facial recognition technology has already worked its way into different parts of our lives, whether it’s the use of the tech at airports, at customs checkpoints, or even in our own smartphones. Whether a full legislative push comes to fruition, however, is very much up in the air.
Got a tip? Email Daniel Howley at firstname.lastname@example.org or via encrypted mail at email@example.com, and follow him on Twitter at @DanielHowley.