More than 60 years ago, Philip K. Dick asked what the world would look like if a technology existed that could know in advance whether a person would commit a crime. That question produced The Minority Report, a short story that years later would be brought to film by Steven Spielberg, starring Tom Cruise. In the story there are mutants, called precogs, who can predict the future and are used by the security forces to stop crimes and murders before they are carried out. Although it is science fiction, the obsession with predicting crime is long-standing: it goes back to marking on a paper map, with pins, the places where crimes occurred. This police practice makes it possible to identify at a glance the areas where crimes happen most frequently. But given the hassle of mapping by hand, police departments around the world have for years been investing to obtain this diagnosis and prediction no longer through paper maps and pins, but through algorithms. Without going any further, the United Kingdom's police want to predict violent crimes using artificial intelligence, with the intention that people identified algorithmically be approached and offered support to avoid possible criminal behavior. The system is called National Data Analytics Solution (NDAS) and uses a combination of artificial intelligence and statistics to try to assess the risk that someone will commit a crime, or become the victim of one, as well as the possibility that someone will fall into the hands of trafficking networks. The project is led by the West Midlands police with the participation of forces from London and Manchester, and it is expected to be ready for use by all UK authorities in March 2019.
These kinds of developments are not isolated cases of technological innovation in security. The use of artificial intelligence to predict the actions of potential offenders has been under evaluation for some time. For example, PredPol is a system developed at Santa Clara University in California that was created to identify, also on the basis of statistics, which points in a city are likely to suffer the most robberies. Another system, used in Los Angeles, assigns a score based on variables such as prior convictions or known gang membership; using this technology, patrols change their routes to keep a closer watch on the «riskier» people. Back in Europe, the Netherlands uses another tool that analyzes crime data and social data for specific areas, such as residents' ages, their income and whether they receive social benefits. This tool is also used to predict where in a city crimes are most likely to occur. As might be expected, the use of such solutions raises a wealth of philosophical questions. Is someone a criminal if they have not yet committed a crime? And if they never commit it, does that mean the technology was wrong? Should a person be left free if the algorithms are accurate enough to have only a minimal error rate? Will this kind of technology have the transparency needed to be audited by a third party? Since not all innovation is good per se, the application of this technology has raised doubts in both the technology and philosophy communities. Indeed, a group of experts from the Alan Turing Institute, a London-based institution that is among the most important in England in the field of artificial intelligence, published a paper which, while recognizing the value of these developments, questioned part of their underlying logic: «Among our concerns are the ethical dangers of an inaccurate prediction (false positive or negative) given the state of the art of predictive policing.»
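To make the Los Angeles-style scoring concrete, here is a minimal sketch of how a risk score over weighted personal variables might be computed. The feature names, weights and logistic form below are invented for illustration only; the real systems mentioned in this article are proprietary and far more complex.

```python
import math

# Illustrative only: these features and weights are made up for this sketch,
# not taken from any real predictive-policing system.
WEIGHTS = {
    "prior_convictions": 0.8,
    "known_gang_member": 1.5,
    "age_under_25": 0.4,
}
BIAS = -3.0


def risk_score(person: dict) -> float:
    """Map weighted features through a logistic function to a 0..1 'risk'."""
    z = BIAS + sum(w * person.get(feature, 0) for feature, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))


# A patrol route might then be changed to watch people above some threshold.
person = {"prior_convictions": 2, "known_gang_member": 1, "age_under_25": 1}
print(round(risk_score(person), 3))  # → 0.622
```

The philosophical questions in this article apply directly to such a sketch: every weight encodes an assumption about who is «risky», and those assumptions come from the people and data that built the model.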
«We believe that any predictive risk model must undergo a rigorous evaluation of its effectiveness and its ethical impact, and that it should include a strong program of monitoring and evaluation,» the Institute wrote. Martin Innes, director of the Crime and Security Research Institute at Cardiff University, United Kingdom, said he is «skeptical» about how well the system will work at the individual level. In his view, the tool will be most useful for locating at-risk communities in general, as it is used in the Netherlands.
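The rigorous evaluation the Turing Institute experts call for would, at a minimum, measure how often a model is wrong in each direction. A minimal sketch, with made-up data, of computing false positive and false negative rates:

```python
# Illustrative sketch of one part of the evaluation the Alan Turing Institute
# experts call for: measuring a risk model's error rates. The data is made up.

def error_rates(predicted, actual):
    """Return (false_positive_rate, false_negative_rate)."""
    fp = sum(1 for p, a in zip(predicted, actual) if p and not a)
    fn = sum(1 for p, a in zip(predicted, actual) if not p and a)
    negatives = sum(1 for a in actual if not a)
    positives = sum(1 for a in actual if a)
    return fp / negatives, fn / positives


# predicted: flagged as «risky»; actual: later involved in a crime (toy data)
predicted = [1, 1, 0, 1, 0, 0, 1, 0]
actual = [1, 0, 0, 1, 1, 0, 0, 0]
fpr, fnr = error_rates(predicted, actual)
print(round(fpr, 2), round(fnr, 2))  # → 0.4 0.33
```

In this context each false positive is an innocent person flagged by the system, and each false negative is a missed opportunity for intervention, which is why the experts insist both rates be monitored continuously rather than measured once.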
In November 2018, the first British police force to incorporate predictive capabilities decided to abandon the project after five years. Kent Police, which used the PredPol solution, found that while the tool met expectations, its use did not lead to a noticeable decline in criminal incidents. Other reasons were the high cost (approximately $100,000 a year) and the dependence on an external company; the aim is to eventually be able to develop such solutions in-house. The question we should ask ourselves in such cases is: is it acceptable to use technological advances that are far from perfect and that could directly affect the people living in our societies? After all, it would not be the first time an artificial intelligence discriminated based on the biases of its creators. It happened at Amazon, where an algorithm created to find the best talent concluded that men were better candidates and began to discriminate against women. Naturally, it was shut down, but not even Philip K. Dick imagined that scenario.