Some notes on “Predicting risk in criminal procedure: actuarial tools, algorithms, AI and judicial decision making”

I just read Carolyn McKay’s article on predicting risk. It converges with some things in law and forensic science that I’ve been thinking about lately, so I thought it was worth a post.

Carolyn begins by reviewing some risk prediction techniques in legal contexts:

  • By judges using law and intuition;

  • By clinicians using their training and experience (and intuition);

  • By clinicians relying on actuarial (statistical) tools; and, more recently,

  • By algorithms, some of which are based on machine learning.

All of these have received some criticism. For instance, judges are often accused of relying on subjective hunches.

Carolyn goes on to discuss some of the practical and legal challenges that come with algorithms and statistics. For instance, algorithmic and statistical tools may have biases embedded in them. In the case of machine learning, the internal processes that might reveal such bias are difficult to scrutinize (i.e., the model is a black box). Then there is the added problem that the companies creating this code often claim trade secret privilege over it. Carolyn suggests this technology raises procedural justice and human rights issues and, in particular, infringes on the notion of open and individualized justice. There is also the issue of legal decision makers delegating their authority to these seemingly opaque instruments.
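To make the bias worry concrete, here is a minimal sketch in Python of the kind of audit critics call for: checking whether a tool's false positive rate differs across groups. Everything in it is made up for illustration (the cases, the group labels, the high-risk threshold of 7); none of it comes from Carolyn's article or any real tool.

```python
# Illustrative only: auditing a hypothetical risk tool's error rates by group.

def false_positive_rate(records, group, threshold=7):
    """Among people in `group` who did NOT reoffend, how often did the
    tool wrongly flag them as high risk (score >= threshold)?"""
    negatives = [r for r in records if r["group"] == group and not r["reoffended"]]
    flagged = [r for r in negatives if r["score"] >= threshold]
    return len(flagged) / len(negatives) if negatives else float("nan")

# Hypothetical scored cases: 1-10 risk score, observed outcome, group label.
cases = [
    {"group": "A", "score": 8, "reoffended": False},
    {"group": "A", "score": 3, "reoffended": False},
    {"group": "A", "score": 6, "reoffended": True},
    {"group": "B", "score": 9, "reoffended": False},
    {"group": "B", "score": 8, "reoffended": False},
    {"group": "B", "score": 2, "reoffended": True},
]

for g in ("A", "B"):
    print(g, round(false_positive_rate(cases, g), 2))  # A: 0.5, B: 1.0

# Unequal false positive rates across groups are one signature of embedded
# bias -- and they are detectable from inputs and outputs alone, even when
# the code itself is a trade secret.
```

The point of the sketch is that this style of audit treats the tool itself as a given and examines only its behaviour, which is exactly why trade secret claims do not fully insulate these instruments from scrutiny.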

Overall, Carolyn provides a very readable and lucid account of new risk assessment technologies.

In many ways, I think the psychology of expertise should play a role in this ongoing discussion. For instance, it’s not as if human judgment is any more transparent than these algorithms. People are not very good at knowing how they reach their own conclusions. Indeed, Rachel Searston and I discuss the black box nature of forensic expertise in a recent article. Given the mistakes made by forensic experts, testable machine learning techniques offer more transparency and, very likely, more accuracy.

I’d also point out that, much of the time, the black box character of machine learning is somewhat overstated; many models can be interrogated directly, as the sketch below shows.
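Here is a minimal sketch of that idea (synthetic data; scikit-learn assumed; the feature names are hypothetical): a logistic regression trained as a risk model is fully inspectable, because its coefficients show exactly which inputs push its risk scores up or down. Deeper models take more work, but techniques like permutation importance serve a similar purpose.

```python
# Illustrative sketch: an "opaque" trained risk model can be opened up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
prior_offences = rng.integers(0, 10, n)  # hypothetical feature
age = rng.integers(18, 70, n)            # hypothetical feature
X = np.column_stack([prior_offences, age])

# Synthetic outcome: reoffending driven mostly by prior offences.
y = (prior_offences + rng.normal(0, 2, n) > 5).astype(int)

model = LogisticRegression().fit(X, y)
for name, coef in zip(["prior_offences", "age"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")

# The signed coefficients reveal which inputs raise or lower the risk
# score, and by how much -- the model's reasoning is laid bare in a way
# a judge's intuition never is.
```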

Finally, this idea that algorithmic risk prediction can be contrary to ‘open justice’ is very interesting. I’ve tried to make connections between the idea of open justice and closed forensic science twice now (here and here). From the latter article:

Science—our culture's principal means of answering factual questions—is changing. It is being conducted more openly and transparently. There are many reasons for these reforms: open science is more democratic and inclusive; it enables more thorough assessment of factual claims; and, it facilitates more collaborative and efficient research. The direct impetus for many of the reforms going on in science was a crisis of confidence: opaquely conducted science was producing results that could not be reproduced.

A similar crisis of confidence may be engulfing forensic science. Attentive researchers have long noted the surprising frequency at which forensic science has committed factual mistakes. Media attention and subsequent popular knowledge seems to be catching up with this academic research. When law—a field inextricably tied to forensic science—has sought to improve confidence in its product, the answer has often been through open justice: opening courtrooms, permitting media scrutiny, and publishing decisions. It may be time that forensic science follows suit.

I’d love to see a sustained argument that ties all of these issues together and fleshes out what exactly open justice is.