Could Predictive Policing Lead to a Real-Life Minority Report?

Everyone knows prevention is better than a cure, and that’s as true for law enforcement as it is for medicine. But there’s little evidence that a growing trend towards “predictive policing” is the answer, and it could even bake in racial bias.

Police departments faced with tight budgets are increasingly turning to software that uses machine learning to sift through crime data and help predict where crimes are likely to occur and who might commit them.

Using statistics in law enforcement is nothing new. A statistical system for tracking crime called Compstat was pioneered in New York in 1994 and quickly became popular elsewhere. Since then, crime in New York has fallen 75 percent, a drop some have credited to the technology. But while Compstat simply helped identify historical hotspots, “predictive policing” uses intelligent algorithms to forecast tomorrow’s hotspots and offenders.

Software like PredPol, HunchLab and products from IBM, Microsoft and Hitachi analyze data from sources as varied as crime statistics, weather patterns, bar closing times and school term schedules to predict where crimes are most likely to happen. Elsewhere, offender-based modeling uses factors like criminal record, employment history and age to create risk profiles that are used to decide whether to grant people parole, refer them to social services or put them under surveillance.
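To make that concrete, here is a deliberately simplified sketch of how an offender-based risk score might be computed from factors like those above. Every feature, weight and threshold here is hypothetical; real vendors keep their models proprietary, and this is not drawn from any of them.

```python
# Purely illustrative: a toy offender-based risk score.
# The features, weights and caps below are hypothetical,
# not taken from PredPol, HunchLab or any real vendor's model.

def risk_score(prior_arrests: int, years_employed: float, age: int) -> float:
    """Combine a few factors into a 0-1 'risk' value via a weighted sum."""
    score = (
        0.5 * min(prior_arrests / 10, 1.0)                # criminal record, capped
        + 0.3 * (1.0 - min(years_employed / 10, 1.0))     # employment history
        + 0.2 * (1.0 - min(max(age - 18, 0) / 40, 1.0))   # youth weighted as riskier
    )
    return round(score, 2)

# A profile like this might then feed a decision about parole,
# a social-services referral or surveillance.
print(risk_score(prior_arrests=3, years_employed=2, age=22))  # prints 0.57
```

Even in this toy version, the problem the critics raise is visible: the score is only a repackaging of whatever went into those input fields.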

But despite wide adoption of these systems, there is still little evidence to support their use, Aaron Shapiro, a doctoral candidate at the Annenberg School for Communication at the University of Pennsylvania, writes in Nature.

There have been few independent evaluations of the technology so far, he points out, but they have not been glowing. In a 2014 report on a predictive policing program in Shreveport, Louisiana, the non-profit RAND Corporation found “no statistical evidence that crime was reduced more in the experimental districts than in the control districts.”

Another report from RAND examined a Minority Report-style pre-crime system implemented by Chicago police, designed to identify people at risk of being shot or of committing a shooting. The researchers found that police either ignored the system or used it to disproportionately target people on the list for arrest.

Part of the problem is that it is very hard to evaluate these systems.

A lot of police data is private or classified, Shapiro points out; the algorithms used to analyze it are proprietary; and crime is such a complex problem that endless confounding factors can skew an analysis. Gold-standard randomized controlled trials are almost impossible to organize. “The average police chief lasts three years,” former Pittsburgh Police Chief Cameron McLay told Science. “I don’t have time for controls.”

In addition, “There is no agreement as to what predictive systems should accomplish — whether they should prevent crime or help to catch criminals — nor as to which benchmarks should be used,” says Shapiro.

This is a problem, because predictive algorithms lend policing a veneer of scientific impartiality. If the public cannot scrutinize these data-driven approaches, police departments could use them to legitimize tactics that disproportionately target certain areas or social groups.

The racial biases inherent in police data are well established, and a model, no matter how intelligent, is only as good as its data.

“Racially-biased discretionary decisions will result in data points that the police will feed into predictive tools, which will in turn result in predictions that will have nested within them those original racial disparities,” Ezekiel Edwards, director of the ACLU Criminal Law Reform Project, writes in the Huffington Post. “As such, they will likely compound the crisis of unequal and unfair treatment of communities of color under the inveigling imprimatur of empiricism.”
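A deliberately simplified simulation shows how that nesting works. In the hypothetical sketch below (my own illustration, not any vendor’s algorithm), two districts have identical true crime rates, but one starts with more recorded crime because of biased enforcement; once patrols follow the recorded data and new records follow the patrols, the original disparity reappears in every future prediction.

```python
# Toy simulation of the loop Edwards describes. Both districts have the
# SAME true crime rate, but District B starts with more recorded crime
# because of biased discretionary enforcement. Patrols are then allocated
# from the recorded data, and what gets recorded depends on where the
# patrols are. All numbers are hypothetical.

TRUE_CRIMES = 100        # actual crimes per period, identical in both districts
DETECTION_RATE = 0.005   # fraction of true crimes recorded per patrol unit
TOTAL_PATROLS = 100

recorded = {"A": 40.0, "B": 60.0}  # biased starting data, not true rates

for period in range(1, 6):
    total = sum(recorded.values())
    # Patrols allocated in proportion to past recorded crime...
    patrols = {d: TOTAL_PATROLS * r / total for d, r in recorded.items()}
    # ...and recorded crime tracks patrol presence, not true crime.
    recorded = {d: TRUE_CRIMES * DETECTION_RATE * patrols[d] for d in patrols}
    print(period, {d: round(r, 1) for d, r in recorded.items()})

# District B accounts for 60 percent of recorded crime in every period.
# The original 60/40 disparity is nested in every future prediction,
# even though the true crime rates never differed.
```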

Shapiro notes that many approaches try to tackle this by using only publicly reported crimes like robbery, burglary, theft and murder, rather than offenses like vandalism, drunkenness or drug sales that are picked up by police officers. But because immigrant communities are less likely to report crime, they could end up underserved by police.

As these systems are augmented with officer-monitoring tools like body cameras and GPS tracking, Shapiro says there could be a temptation to generate more of these racially skewed statistics. It could also lead to a deskilling of officers, as the ability to monitor them continuously could be used to justify lowering educational requirements for recruitment.

Ultimately, predictive policing is not a substitute for the regulatory and institutional changes needed to reform law enforcement in the US, says Shapiro.

If applied properly, these systems do have the potential to cut crime, highlight patterns of discrimination and save money. But this will only happen if police forces are more transparent about their use of predictive policing and vendors open up about the algorithms and criminological theories at the heart of their products.

The agencies funding many of these projects should also commit to funding studies into the possible unintended consequences of these approaches and publish guidelines for police forces looking to implement them.

“I caution that even sophisticated predictive systems will not deliver police reform without regulatory and institutional changes,” writes Shapiro. “Checks and balances are needed to mitigate police discretionary power. We should be wary of relying on commercial products that can have unanticipated and adverse effects on civil rights and social justice.”

Image Credit: Shutterstock

Edd Gent
http://www.eddgent.com/
I am a freelance science and technology writer based in Bangalore, India. My main areas of interest are engineering, computing and biology, with a particular focus on the intersections between the three.