Predicting the future offender (Part 2)

By popular demand, and motivated by yet another journalistic account of developments in predictive policing (which I got to know through the very prolific criminologist and tweeter @ManneGerell), I am presenting a slightly adapted version of my last blog entry. So here we go…

In the last few years, we have seen an increasing interest in computational criminology. American criminologist Richard Berk has been one of the most prolific authors in this area. His work in particular has consisted in applying algorithms and scientific practices developed in the machine learning community to improve the predictive accuracy of risk assessment tools used in criminal justice. Berk has tried to make the case that by using some of the most advanced algorithms (such as random forests) we can reduce the classification error of these tools.
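To make the idea a bit more concrete, here is a minimal sketch of the kind of comparison involved. This is not Berk’s actual analysis; the file name, column names and outcome variable are all hypothetical, purely for illustration. The point is simply to show what “comparing classification error between a simple baseline and a random forest” looks like in code:

```python
# A minimal sketch of the kind of comparison described above: does a random
# forest reduce classification error relative to a simpler baseline?
# The file, column names and outcome below are hypothetical, for illustration only.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical risk-assessment data with a binary flag for a new offence.
df = pd.read_csv("risk_data.csv")
X = df[["prior_arrests", "prior_convictions", "age_at_first_arrest", "current_age"]]
y = df["reoffended_within_2_years"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=500, random_state=42).fit(X_train, y_train)

print("Logistic regression error:", 1 - accuracy_score(y_test, baseline.predict(X_test)))
print("Random forest error:      ", 1 - accuracy_score(y_test, forest.predict(X_test)))
```

Whether the forest’s gain in held-out accuracy is worth the loss of interpretability is, of course, part of the debate that follows.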

This type of “promise” has found an appropriate niche in our cultural context. In the West we have always been keen to accept the idea of progress through technological development. The use of predictive analytics in policing is perhaps one of the most recent chapters in this history. As I discussed in a previous entry, there is an increasing interest among private companies in making good money (out of increasingly poorer police forces) by developing predictive applications for the police. Sometimes the marketing and publicity is a bit over the top (see here and here), and the evidence base for these applications is still developing (see the RAND report on the matter).

Debates around predictive policing have for the most part focused on applications at the area level, as a solution for prospectively identifying hot spots of crime. But the field is progressively moving towards applications at the individual level (as in the NYT story mentioned above). In my last entry, I also discussed a particular example, VIOGEN (a tool used by the Spanish police in intimate partner violence cases). Equally, we are beginning to see how predictive analytics are starting to play a more important role not only in policing, but also in sentencing (the very controversial “evidence-based sentencing”). These applications represent a nontrivial innovation. In sum:

  1. Although it is true that risk assessment and efforts at predicting the future have a long history in criminal justice, particularly in the context of probation and correctional practice (which are not exempt from controversy: see here, here or here);
  2. Technological, cultural and economic developments are significantly expanding their field of action, and this, in turn, is generating greater scrutiny and debate.

This expansion is generating all sorts of responses. At the recent annual meeting of the European Society of Criminology in Porto there were at least two panels devoted to these issues. Adam Edwards, from Cardiff University and linked to the COSMOS project (a good example of these developments), classifies reactions into three categories: enthusiasts (see for example the TED talk by Anne Milgram); critics (see for example the paper by Sonja Starr on evidence-based sentencing); and sceptics (Adam included).

I tend to side with the sceptics. Like Berk, I think we can learn a lot from the machine learning community. My own current ESRC-funded work in collaboration with computer scientists is using predictive analytics to help police better classify domestic abuse cases. Indeed, my favourite paper at the ASC last year was Cynthia Rudin’s work on understanding patterns of repeat offending using really smart algorithms (BTW, I highly recommend her recently launched MOOC on data science on the edX platform). But at the same time we need to be cognizant of the problems and dangers that uncritical application of these tools may bring about.

First, because, as the Danes say, it is very difficult to make predictions, particularly predictions about the future. This is not to say we should not try, but (a) we still have a very long way ahead of us and (b) at present these predictions need to be complemented by human judgement. Why? The meta-analysis by Seena Fazel and colleagues (2012) concluded not long ago that “after 30 years of development, the view that violence, sexual, or criminal risk can be predicted in most cases is not evidence based. This message is important for the general public, media, and some administrations who may have unrealistic expectations of risk prediction”, and that for criminal justice applications “risk assessment tools in their current form can only be used to roughly classify individuals at the group level, and not to safely determine criminal prognosis in an individual case”. We should not ignore pronouncements of this nature.

Second, because there is a real danger of discriminatory algorithms. This is something that the mathematical and computer science community is increasingly concerned with. Anne Milgram, in her response to a critical New York Times article about some of these problems, showed an unjustified naivety. To say that the particular tool they developed is not discriminatory “because it does not take into account variables such as race” is to show very little understanding of how machine learning works. This is very well explained by Moritz Hardt: “a learning algorithm is designed to pick up statistical patterns in training data. If the training data reflect existing social biases against a minority, the algorithm is likely to incorporate these biases. This can lead to less advantageous decisions for members of these minority groups. Some might object that the classifiers couldn’t possibly be biased if nothing in the feature space speaks of the protected attribute, e.g., race. This argument is invalid. After all, the whole appeal of machine learning is that we can infer absent attributes from those that are present. Race and gender, for example, are typically redundantly encoded in any sufficiently rich feature space whether they are explicitly present or not.” (see as well Barocas and Selbst, 2015).
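A simple way to check Hardt’s point on your own data is to see how well the supposedly excluded attribute can be recovered from the remaining features. The sketch below assumes a hypothetical risk dataset with a “race” column and an outcome column; the file and variable names are invented for illustration, and the remaining columns are assumed to be numeric:

```python
# A quick diagnostic of redundant encoding: even if 'race' is excluded from
# the model, it may be recoverable from the remaining features.
# File and column names are hypothetical, for illustration only.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("risk_data.csv")
features = df.drop(columns=["race", "reoffended_within_2_years"])
protected = df["race"]

# If this cross-validated accuracy sits well above the base rate, the feature
# space "knows" the protected attribute even though it was never included.
scores = cross_val_score(RandomForestClassifier(n_estimators=300), features, protected, cv=5)
print("Mean accuracy predicting race from the other features:", scores.mean())
```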

Let’s think of an example. We know that a good predictor of future behaviour is past behaviour. In criminal justice, we don’t really have direct measures of past criminal behaviour. However, we have indirect measures, such as records of previous police or other criminal justice interventions (e.g., arrests, convictions, etc.). So one could use these indirect measures as proxies in our feature space to develop a prediction. The thing is, these measures also incorporate the biases of the criminal justice system, which tends to discriminate against certain ethnic groups, residents of certain communities, the unemployed, etc. Let’s not forget, critically, that the way we evaluate these models is by looking at how well they predict a particular outcome: a new crime. And since we don’t have direct measures of crime, we also rely on proxies such as a new call for service, a new arrest, etc. (which will also incorporate the biases of the criminal justice system).
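To see how this mechanism plays out, consider a toy simulation (the numbers are entirely made up, purely to illustrate the logic): two groups offend at exactly the same rate, but one group is policed more heavily, so its offences are more likely to end up recorded as arrests. Any model trained and evaluated on the arrest proxy will then “learn” that this group is higher risk:

```python
# A toy simulation of the proxy problem described above: the true offending
# rate is identical across two hypothetical groups, but the recorded proxy
# (an arrest) occurs more often for group B because of heavier policing.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.choice(["A", "B"], size=n)      # hypothetical demographic groups
offended = rng.random(n) < 0.10             # same true offending rate (10%) in both groups

# Detection probability differs: offences by group B are recorded more often.
detection_rate = np.where(group == "A", 0.3, 0.6)
arrested = offended & (rng.random(n) < detection_rate)

# The label we can actually train and evaluate on is 'arrested', not 'offended'.
for g in ["A", "B"]:
    print(g, "true offending rate:", offended[group == g].mean().round(3),
          "| recorded arrest rate:", arrested[group == g].mean().round(3))
```

The point is not that any real system looks like this, but that evaluating a model against the proxy outcome cannot, by itself, reveal this kind of bias.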

The scientific and legal community, as I said, is increasingly concerned with these issues. If you are too, you should keep your ears open for the next conference on “Fairness, Accountability and Transparency in Machine Learning”, an excellent forum for discussing this sort of topic from a very technical angle. These are complex problems and, as Barocas and Selbst (2015) argue, “addressing the sources of this unintentional discrimination and remedying the corresponding deficiencies in the law will be difficult technically, difficult legally, and difficult politically” and “there are a number of practical limits to what can be accomplished computationally.” It will be very interesting to see how the criminological community engages in these debates in forthcoming years.
