
Predictive policing poses discrimination risk, thinktank warns

Photograph: Carl Court/Getty Images

Predictive policing – the use of machine-learning algorithms to fight crime – risks unfair discrimination on the basis of protected characteristics including race, sexuality and age, a security thinktank has warned.

Such algorithms, used to mine insights from data collected by police, are currently deployed for various purposes including facial recognition, mobile phone data extraction, social media analysis, predictive crime mapping and individual risk assessment.

Researchers at the Royal United Services Institute (RUSI), commissioned by the government’s Centre for Data Ethics and Innovation, focused on predictive crime mapping and individual risk assessment. They found that algorithms trained on police data may replicate – and in some cases amplify – the existing biases inherent in the data set, such as the over- or under-policing of certain communities.

“The effects of a biased sample could be amplified by algorithmic predictions via a feedback loop, whereby future policing is predicted, not future crime,” the authors said.
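The feedback loop the authors describe can be sketched with a toy simulation (the numbers, area names and allocation rule below are purely illustrative and not drawn from the paper): two areas have identical underlying crime rates, patrols are always sent to the area with the most recorded crime, and crime is only recorded where an officer is present to observe it.

import random

random.seed(0)

true_rate = {"area_a": 0.1, "area_b": 0.1}  # identical underlying crime rates
recorded = {"area_a": 6, "area_b": 5}       # area_a starts out slightly over-policed

for day in range(1000):
    # "Predictive" step: send the day's patrol to the area with the most recorded crime.
    target = max(recorded, key=recorded.get)
    # Crime is only recorded where an officer is present to observe it.
    if random.random() < true_rate[target]:
        recorded[target] += 1

print(recorded)
# area_a accumulates roughly a hundred further recorded incidents while area_b
# stays where it started: the data describes where officers were sent, not
# where crime occurred.

Because area_a’s small head start can only grow under this rule, the algorithm ends up predicting future policing rather than future crime, which is the dynamic the researchers warn about.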

The paper reveals that police officers interviewed for the research are concerned about the lack of safeguards and oversight governing the use of predictive policing.

One officer told the researchers that “young black men are more likely to be stop and searched than young white men, and that’s purely down to human bias. That human bias is then introduced into the data sets, and bias is then generated in the outcomes of the application of those data sets.”

Another officer said police forces “pile loads of resources into a certain area and it becomes a self-fulfilling prophecy, purely because there’s more policing going into that area, not necessarily because of discrimination on the part of officers”.

The technological landscape was described by one officer as a “patchwork quilt, uncoordinated and delivered to different standards in different settings and for different outcomes”.

The briefing paper also finds that individuals from disadvantaged socioeconomic backgrounds are “calculated as posing a greater risk” of criminal behaviour by such algorithms.

This bias arises because individuals in this group are more likely to have frequent contact with public services and, in doing so, generate more data to which the police often have access, the paper reveals.
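The report does not give a worked example, but the effect can be illustrated with a deliberately naive, hypothetical scoring rule: if a risk score is driven by how many records are held about a person, people who have more contact with public services score higher even when nothing in those records points to greater risk.

from dataclasses import dataclass

@dataclass
class Person:
    name: str
    records: list  # records about the person that are visible to police

def naive_risk_score(person):
    # Scores by the volume of data held, not by what the data actually shows.
    return len(person.records)

low_contact = Person("A", ["minor traffic stop"])
high_contact = Person("B", [
    "housing support referral",
    "school welfare contact",
    "benefits assessment",
    "minor traffic stop",
])

for person in (low_contact, high_contact):
    print(person.name, naive_risk_score(person))
# B scores four times higher than A purely because more data exists about B.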

The implications are serious both operationally and legally, the paper adds: police resources may be allocated ineffectively because they rest on flawed calculations, and “discrimination claims could be brought by individuals scored ‘negatively’ in comparison to others of different ages or genders”.

The briefing paper also highlights the risk of “automation bias”, whereby police officers become over-reliant on analytical tools, undermining their discretion and causing them to disregard other relevant factors.

The paper, Data Analytics and Algorithmic Bias in Policing, by Alexander Babuta and Marion Oswald, summarises the interim findings of an ongoing independent study into the use of data analytics for policing within England and Wales and explores different types of bias that can arise.