AI Predicts 90 Percent Of Crime Before It Happens, Creator Argues It Won’t Be Misused

A new AI, currently being tested in several US cities, anticipates crimes before they occur and predicts them correctly about 80–90 percent of the time. Although it was allegedly developed to optimize policy and direct resources to the parts of a city that need them most, there are many worries given AI's dismal track record and built-in biases.

The creator has since given an interview to BBC Science Focus, in which he spoke at length about why he believes the use of his AI would be beneficial.

In their latest research, published in Nature Human Behaviour, Professor Ishanu Chattopadhyay and colleagues present a predictive AI model covering eight major US cities. The idea is easy to grasp: the city of Chicago publishes event logs recording the location and time of each crime, and this information is fed into a machine learning algorithm. The city is divided into roughly 90 square meter (1,000 square foot) regions to produce what the researchers call a "time series" for each region. The AI analyzes these time series to forecast crimes based on where and when they tend to occur.
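The gridding step described above can be sketched in a few lines. This is a minimal illustration, not the researchers' actual pipeline: the tile size, the coordinate scheme (meters from an arbitrary city origin), and the event-log format are all assumptions made for the example.

```python
from collections import defaultdict

# Assumed tile edge in meters. The article describes ~90 m^2 (1,000 sq ft)
# tiles, so roughly 10 m on a side; the paper's exact grid may differ.
TILE_SIZE_M = 10

def tile_id(x_m, y_m, tile=TILE_SIZE_M):
    """Map a point (meters east/north of a city origin) to a grid tile."""
    return (int(x_m // tile), int(y_m // tile))

def build_time_series(events, n_days):
    """Turn (x, y, day) event-log entries into per-tile daily counts --
    the kind of 'time series' a model like this could be trained on."""
    series = defaultdict(lambda: [0] * n_days)
    for x, y, day in events:
        series[tile_id(x, y)][day] += 1
    return dict(series)

# Hypothetical event log: (meters east, meters north, day index).
events = [(10, 12, 0), (15, 20, 0), (40, 5, 1), (12, 18, 1)]
series = build_time_series(events, n_days=3)
# Tile (1, 1) saw one event on day 0 and one on day 1: [1, 1, 0]
```

A model would then look for recurring patterns in each tile's count sequence to forecast where and when the next events are likely.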

In essence, the model can predict that "there will probably be an armed robbery at this precise location on this specific day," but it cannot predict who will commit the crime. This sets it apart from other crime-prediction systems, including an AI we previously covered that identified the people supposedly most likely to commit crimes (and which was, of course, horrifically racist and flawed).

People worry that the technique will be used to imprison people before they commit crimes. That is not going to happen, Chattopadhyay told BBC Science Focus, because the system has no capability to do so: it only forecasts that an event will occur at a certain location. It reveals neither the precise dynamics of those events nor who will carry them out.

This raises one of the most crucial questions on the subject: does this model succumb to the same racial and socioeconomic biases that so many predictive AI systems before it have?

Chattopadhyay claims the model avoids these biases because it was trained purely on event logs, with allegedly minimal manual input, which is a positive sign.

"We have made every effort to minimize bias. That is how our model differs from earlier models," he added.

It remains to be seen whether cities will start using such a model to shape policy or will steer clear given AI's dismal history. The researchers appear convinced that their AI can avoid these problems, but given the many opportunities for misuse that such systems present, many people remain skeptical.