The algorithm was created by social scientists at the University of Chicago who tested and validated the model using historical data on violent crimes (homicides, assaults, and batteries) and property crimes (burglaries, thefts, and motor vehicle thefts) in the city.
These crimes were chosen because they are less likely to reflect the enforcement bias common in drug-related and similar offenses.
The algorithm's seemingly precognitive ability to predict crime was not limited to Chicago. The system performed similarly when fed data from seven other US cities: Atlanta, Austin, Detroit, Los Angeles, Philadelphia, Portland, and San Francisco.
Unlike previous future-crime detection tools, the algorithm does not depict crime as spreading outward from hotspots into surrounding areas. That hotspot-based approach can overlook the city’s complex social environment, as well as the relationship between crime and the effects of police enforcement.
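To make the contrast concrete, here is a minimal, hypothetical sketch in Python of the two views: a hotspot-style approach that smooths recent incident counts so risk bleeds into neighboring areas, versus a per-tile approach that forecasts each small area from its own event history. The tile size, grid dimensions, synthetic data, and simple predictors are all assumptions for illustration, not the authors' actual model.

```python
import numpy as np

# Hypothetical illustration (not the authors' model): contrast a
# hotspot-diffusion view of crime risk with a per-tile time-series view.
# Tile size, grid dimensions, and the simple predictors are assumptions.

rng = np.random.default_rng(0)

# Synthetic event log: (x, y, week) for reported incidents in a square city.
n_events = 5_000
events = np.column_stack([
    rng.uniform(0, 10_000, n_events),   # x coordinate in feet
    rng.uniform(0, 10_000, n_events),   # y coordinate in feet
    rng.integers(0, 104, n_events),     # week index across two years
])

TILE_FEET = 1_000                       # assumed tile size
n_tiles, n_weeks = 10, 104              # 10 x 10 grid, two years of weeks

# Aggregate events into per-tile weekly counts: shape (tile_x, tile_y, week).
counts = np.zeros((n_tiles, n_tiles, n_weeks))
tx = np.minimum((events[:, 0] // TILE_FEET).astype(int), n_tiles - 1)
ty = np.minimum((events[:, 1] // TILE_FEET).astype(int), n_tiles - 1)
tw = events[:, 2].astype(int)
np.add.at(counts, (tx, ty, tw), 1)

def hotspot_risk(recent):
    """Hotspot-style view: smooth the latest counts spatially, so risk
    from a busy tile 'leaks' into its neighbors (3x3 mean filter)."""
    padded = np.pad(recent, 1, mode="edge")
    return sum(
        padded[i:i + n_tiles, j:j + n_tiles]
        for i in range(3) for j in range(3)
    ) / 9.0

def per_tile_forecast(history, window=8):
    """Per-tile view: forecast each tile from its own recent history
    (a trivial moving average standing in for a learned event model)."""
    return history[:, :, -window:].mean(axis=2)

latest_week = counts[:, :, -1]
print("hotspot-style risk map:\n", hotspot_risk(latest_week).round(2))
print("per-tile forecast:\n", per_tile_forecast(counts).round(2))
```

The only point of the sketch is the structural difference: the hotspot map borrows strength from adjacent tiles, while the per-tile forecast depends solely on each tile's own history.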
“Spatial models ignore the natural topology of the city,” said co-author James Evans, PhD, a sociologist and the Max Palevsky Professor at UChicago and the Santa Fe Institute.
“Transportation networks respect streets, walkways, train and bus lines. Communication networks respect areas of similar socioeconomic background. Our model enables the discovery of these connections.”
Despite its accuracy, lead author Ishanu Chattopadhyay, PhD, cautioned that the tool should not be used to direct police forces.
Departments should not use it to proactively swarm neighborhoods to prevent crime, for example. “We created a digital twin of urban environments. If you feed it data from what happened in the past, it will tell you what’s going to happen in the future,” Chattopadhyay said. “It’s not magical; there are limitations, but we validated it and it works really well.”
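As a rough illustration of that “feed it the past, it forecasts the future” workflow, the sketch below walks a simple moving-average baseline forward over a single tile’s synthetic weekly counts. The Poisson data and eight-week window are assumptions; the baseline stands in for, rather than reproduces, the published event-sequence model.

```python
import numpy as np

# Hypothetical sketch of "feed it the past, it forecasts the future" for a
# single tile's weekly event counts. The Poisson data and the moving-average
# baseline are assumptions, not the published event-sequence model.

rng = np.random.default_rng(1)
weekly_counts = rng.poisson(lam=3.0, size=104)   # two years of synthetic counts

past, future = weekly_counts[:-4], weekly_counts[-4:]

history = list(past)
forecasts = []
for actual in future:
    forecasts.append(float(np.mean(history[-8:])))  # predict next week from trailing 8 weeks
    history.append(int(actual))                     # then roll the window forward

print("actual next 4 weeks:  ", future.tolist())
print("forecast next 4 weeks:", [round(f, 2) for f in forecasts])
```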













