Efforts to use AI to predict crime have been fraught with controversy due to the potential to replicate existing biases in policing. But a new system powered by machine learning holds the promise of not only making better predictions but also highlighting these biases.
If there’s one thing that modern machine learning is good at, it’s spotting patterns and making predictions. So, it’s perhaps unsurprising that many in the policy and law enforcement world are keen to put these skills to use. Proponents want to train AI models with historical crime records and other relevant data to predict when and where crimes are likely to happen and use the results to direct policing efforts.
The problem is that this kind of data often harbors biases that are all too easily replicated when it is used to train algorithms uncritically. Previous approaches have sometimes included spurious variables such as the presence of graffiti or demographic data, which can easily lead models to make flawed associations based on racial or socioeconomic criteria.
Even basic police data on reported crimes or numbers of arrests can contain hidden biases. Areas presumed to be high in crime because of pre-existing prejudices tend to be heavily policed, which almost inevitably produces more arrests there. And in areas with high distrust of police, crimes often go unreported.
Nonetheless, being able to anticipate trends in criminal activity ahead of time could benefit society. So, a group from the University of Chicago has developed a new machine learning system that can predict when and where crimes are likely to happen better than previous systems and also be used to probe systemic biases in policing.
The researchers first collated several years’ worth of data from Chicago police on violent and property crimes, as well as the number of arrests resulting from each incident. They used this data to train a suite of AI models that show how changes in each of these variables impact the others.
This allowed the team to predict crime levels in 1,000-foot-wide areas of the city up to a week in advance with 90 percent accuracy, as reported in a recent paper in Nature Human Behaviour. The researchers also showed their approach achieved similar accuracy when trained on data from seven other US cities. And when they tested it on a dataset from a predictive policing challenge run by the National Institute of Justice, they outperformed the best approach in 119 of 120 testing categories.
The researchers put their success down to abandoning approaches that impose spatial constraints on the model by assuming crime appears in hotspots before spreading to surrounding areas. Instead, their model was able to capture more complex connections that could be mediated by transport links, communication networks, or demographic similarities between different regions of the city.
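The paper’s actual models are far more sophisticated than anything that fits in a few lines, but the core idea sketched above — predicting a small tile’s future activity from the lagged activity of every tile in the city, with no assumption that influence spreads only to geographic neighbors — can be illustrated with a toy example. Everything below is synthetic and hypothetical (the tile counts, the one-week coupling, the plain least-squares predictor); it is not the authors’ method, only a minimal sketch of why cross-tile lags can carry predictive signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: daily event counts for a handful of city tiles.
# A real system would use years of per-tile incident records; here
# we simulate tiles whose activity is coupled without being
# spatially adjacent (e.g., linked by transit rather than geography).
n_days, n_tiles, lag, horizon = 600, 6, 14, 7
counts = rng.poisson(3.0, size=(n_days, n_tiles)).astype(float)
# Couple tile 0 to tile 5's past activity with a one-week delay.
counts[7:, 0] += 0.8 * counts[:-7, 5]

# Design matrix: predict tile 0's count one week ahead from the
# previous `lag` days of *every* tile's counts -- no adjacency prior.
X, y = [], []
for t in range(lag, n_days - horizon):
    X.append(counts[t - lag:t].ravel())
    y.append(counts[t + horizon - 1, 0])
X = np.hstack([np.ones((len(X), 1)), np.array(X)])  # add intercept
y = np.array(y)

split = int(0.8 * len(X))
coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
pred = X[split:] @ coef

# Because tile 0 really does depend on tile 5's lagged activity,
# the cross-tile predictor should beat a constant-mean baseline.
err_model = np.mean((pred - y[split:]) ** 2)
err_base = np.mean((y[:split].mean() - y[split:]) ** 2)
print(err_model < err_base)
```

The point of the sketch is the feature construction: every tile’s history enters the predictor on equal footing, so the model is free to discover a long-range link (here, tile 5 driving tile 0) that a hotspot-diffusion model would never consider.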
However, recognizing that the data used for the study was likely tainted by existing biases in policing practices, the researchers also investigated how their model could reveal the ways such prejudices distort how law enforcement deploys its resources.
When the team artificially boosted the levels of both violent and property crime in wealthier neighborhoods, arrests in those neighborhoods jumped while arrests in poorer areas dropped. In contrast, when crime levels were boosted in poor areas, there was no rise in arrests. The implication, say the researchers, is that police prioritize wealthier neighborhoods, which can draw resources away from poorer ones.
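The logic of that perturbation experiment — nudge one input of a fitted model and watch how the outputs respond — can be illustrated with a deliberately crude toy. The dispatch rule and numbers below are invented, not taken from the paper: they simply encode a hypothetical bias (patrol hours track only wealthy-area crime, and arrests are capped by patrol presence) so that perturbing each area’s crime level exposes the bias, mirroring the qualitative pattern the researchers reported.

```python
PATROL_HOURS = 100.0

def arrests(crime_wealthy, crime_poor):
    """Hypothetical biased dispatch: the share of patrol hours
    tracks wealthy-area crime only; poor-area crime is ignored
    when hours are allocated."""
    share_w = crime_wealthy / (crime_wealthy + 5.0)
    hours_w = PATROL_HOURS * share_w
    hours_p = PATROL_HOURS - hours_w
    # Arrests are capacity-limited: patrol presence bounds how
    # many arrests can be made, regardless of how much crime occurs.
    return (min(crime_wealthy, 0.25 * hours_w),
            min(crime_poor, 0.25 * hours_p))

base_w, base_p = arrests(10, 10)  # baseline crime levels
rich_w, rich_p = arrests(20, 10)  # boost wealthy-area crime
poor_w, poor_p = arrests(10, 20)  # boost poor-area crime

print(rich_w > base_w)   # wealthy-area arrests jump...
print(rich_p < base_p)   # ...draining arrests from the poor area
print(poor_p == base_p)  # poor-area boost: no rise in arrests
```

Because the poor area is already saturated (more crime than its patrol hours can convert into arrests), boosting its crime level changes nothing, while boosting wealthy-area crime both raises arrests there and pulls hours away from the poor area. The perturbation probe surfaces the biased allocation rule without ever inspecting it directly, which is the sense in which such a model can act as a diagnostic.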
To validate their findings, the researchers also analyzed the raw police data, using the seasonal increase in crime during the summer months to investigate the effect of elevated crime rates in different areas. The results mirrored the trends identified by their model.
Despite its accuracy, study leader Ishanu Chattopadhyay said in a press release that the tool should not be used to directly allocate police resources, but rather to investigate better policing strategies. He describes the system as a “digital twin of urban environments” that can help police understand the effects of varying crime or enforcement levels across different parts of the city.
Whether the research can help direct the field of predictive policing in a more conscientious and responsible direction remains to be seen, but any effort to balance the technology’s public safety potential against its sizable risks is a step in the right direction.