Some notes on The impact of proficiency testing information and error aversions on the weight given to fingerprint evidence

I recently read Greg Mitchell and Brandon Garrett’s The impact of proficiency testing information and error aversions on the weight given to fingerprint evidence.

Mitchell and Garrett performed a behavioural study, informing mock jurors of the results of a fingerprint examiner’s proficiency tests. They also manipulated the types of errors the proficiency tests revealed (if any): false positives and false negatives. They wanted to see whether this information would affect the weight given to the evidence and the mock jurors’ perception of the strength of the case.

Mitchell and Garrett found that their participants were sensitive to proficiency information. In particular, when proficiency was higher, participants reported it was more likely that the accused in the mock trial left the prints. There wasn’t, however, a significant difference between perfect and high proficiency, or between high and medium proficiency. I wonder whether these are the most likely proficiency levels for most examiners, and thus whether these results will matter much in practice. Still, it’s very good to know that proficiency information has some effect.

The participants did not seem to distinguish between the types of error, suggesting that the information perhaps became too technical for them at that point and that they needed more direction.
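
To make concrete why the error type should matter, here’s a minimal sketch in Python (my own illustration with hypothetical numbers, not anything from the paper) of how an examiner’s false positive and false negative rates feed into the likelihood ratio of a reported match:

```python
# Hypothetical illustration (my numbers, not Mitchell and Garrett's):
# how an examiner's false positive and false negative rates should,
# in principle, set the weight of a reported fingerprint match.

def match_likelihood_ratio(false_positive_rate, false_negative_rate):
    """LR = P(match reported | same source) / P(match reported | different source)."""
    sensitivity = 1.0 - false_negative_rate  # P(match reported | same source)
    return sensitivity / false_positive_rate

# For a reported match, the false positive rate dominates the weight;
# the false negative rate barely moves it.
print(match_likelihood_ratio(0.01, 0.10))  # ~90
print(match_likelihood_ratio(0.10, 0.01))  # ~9.9
```

On this logic, a juror weighing a reported match should care far more about false positives than false negatives, which makes the participants’ insensitivity to error type all the more notable.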

Finally, I wish that Mitchell and Garrett had included a dependent variable asking whether the juror would vote to convict beyond a reasonable doubt. I’d be curious to see whether merely showing that the examiner had 98% proficiency could shift votes from guilty to not guilty.

Some notes on The Admissibility of Forensic Reports in the Post-Justice Scalia Supreme Court

I recently read Laird Kirkpatrick’s The Admissibility of Forensic Reports in the Post-Justice Scalia Supreme Court.

It’s a short paper about recent U.S. jurisprudence on requiring forensic examiners to come into court and be examined on their written reports (i.e., the scope of the Confrontation Clause). I knew, from reading Cheng and Nunn’s recent piece, that this was a controversial area right now, but I didn’t realize just how controversial, or how much Scalia’s absence would shape the future jurisprudence. I also tend to agree with Cheng and Nunn: I am not sure how much a live witness really adds in such matters, and transparency and openness of the process seem more important. Still, this area of law may help determine the admissibility of pre-recorded expert evidence, an area I am very interested in.

It might be useful for me to just summarize the cases reviewed in Kirkpatrick’s article.

Melendez-Diaz v Massachusetts, 557 US 305 (2009): This was a 5-4 decision concerning the constitutionality of a MA law allowing forensic reports to be tendered without calling the analyst who made the report. The instant report was about whether a certain substance was cocaine.

The majority decision, written by Justice Scalia, said that this was a testimonial statement and so it triggered the Confrontation Clause.

The dissent seemed to be all over the place, saying that the maker of the report wasn’t a conventional “accusatory” witness. They also discussed some policy reasons militating in favour of exempting forensic reports, such as it not being clear which analyst should appear, logistical difficulties, and the fact that it wouldn’t be hard for the accused to simply subpoena the analyst.

The majority judgment noted previous problems with the state’s forensic scientific evidence, a point I’m glad they drew attention to.

Williams v Illinois, 567 US 50 (2012): This case is even more confusing. It concerned a report from a private DNA testing service (Cellmark) that created a latent DNA profile based on semen found on the complainant. The Cellmark analyst was not called to testify, which raised Confrontation Clause concerns.

A four-justice plurality allowed the evidence because it was not being offered for the truth of its contents and because it was not accusatorial. Justice Thomas concurred in the result but rejected the plurality’s reasons, saying instead that the report was not testimonial for lack of formality - it was not certified (I don’t have sufficient background in US evidence law to follow this, but it seems rather technical).

The dissent would have held that the Confrontation Clause was violated. In particular, there was no chance to examine the Cellmark analyst on his or her proficiency.

Stuart v Alabama, 139 S Ct 36 (2018): The Supreme Court recently denied cert in this case, but the dissent from that denial is very interesting.

Justice Gorsuch (joined by Justice Sotomayor) discussed the problems with forensic science and the importance of testing it in court. He said the report was clearly admitted for the truth of its contents (unlike the plurality in Williams), and I agree. He also did not agree with Justice Thomas’s concurrence about certification.

As a result of this cert decision, it seems the application of the Confrontation Clause to forensic reports is in considerable doubt, with Justice Gorsuch perhaps taking up Scalia’s mantle.

I’d like to see a future decision take a careful look at exactly what adversarial testing might really reveal about a forensic report. And can some sort of controlled transparency reveal the same information more efficiently, much the way the open science movement is working to reveal the uncertainties in the scientific process?

Some notes on Perceived infallibility of detection dog evidence: implications for juror decision-making

a good boy.

I just read Lisa Lit and colleagues’ “Perceived infallibility of detection dog evidence: implications for juror decision-making” (Criminal Justice Studies). Since reading about the role of dog evidence in the wrongful conviction of Guy Paul Morin, I’ve wondered about how such evidence is used at trial - this paper filled in some of those gaps for me and provided some new empirical evidence.

The article provided brief background on how U.S. courts evaluate dog drug detection evidence. It sounds an awful lot like how they evaluate other contentious forensic expert evidence:

Detection dog evidence is among the more technical areas of forensic science, and scientific falsifiability would be a high threshold for it to meet given the common challenges associated with this form of evidence. In practice, however, the 2013 Harris v. Florida Supreme Court case ‘closed the door on science-based challenges to the reliability of canine sniff evidence’ (Shoebotham, 2016, p. 227). Rather than confronting the fundamental usage of detection dog evidence, the Harris case accepted this practice and instead limited questions of reliability to the performance of individual dogs (Shoebotham, 2016; Taslitz, 2013).

In assessing a particular dog’s reliability, the courts weigh its training and certifications, and in some cases, field performance (Fazekas, 2012; Shoebotham, 2016; Taslitz, 2013). Anecdotal support for dog training and certification is routinely provided by handlers and dog industry professionals who are considered expert witnesses, including the training and certification protocols recommended by The Scientific Working Group on Dog and Orthogonal Detector Guidelines (SWGDOG). However, it is important to note that there are no national standards for dog training and that certifications are typically generated and provided by the private agencies that train and then ultimately sell detection dogs (Johnen, Wolfgang, & Fischer-Tenhagen, 2017; Minhinnick, 2016; Shoebotham, 2016). Accordingly, when it comes to the reliability and admissibility of detection dog evidence, the Harris case has resulted in the Daubert Standard’s threshold of scientific falsifiability being superseded by criteria that are more subjective and prone to bias.

The article goes on to mention research supporting the impetus behind the authors’ study: drug detection dogs do have an error rate (apparently about 10%, though the article doesn’t say exactly how that was measured or whether it’s the false positive rate). Some research has also suggested that people ascribe a mystical infallibility to dog detection.
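
As a rough illustration of why that distinction matters, here’s a minimal sketch (my own, with hypothetical numbers not drawn from the article) of the posterior probability that drugs are present given an alert, by Bayes’ rule:

```python
# Hypothetical illustration (my numbers, not the article's): the value of
# a dog alert depends on whether the ~10% figure is a false positive rate,
# and on the base rate of drugs among searched vehicles.

def p_drugs_given_alert(base_rate, sensitivity, false_positive_rate):
    """Posterior P(drugs present | dog alerts), by Bayes' rule."""
    p_alert = sensitivity * base_rate + false_positive_rate * (1.0 - base_rate)
    return sensitivity * base_rate / p_alert

# If 10% of searched cars carry drugs, a 90%-sensitive dog with a 10%
# false positive rate is wrong about half the time it alerts.
print(p_drugs_given_alert(base_rate=0.10, sensitivity=0.90,
                          false_positive_rate=0.10))  # 0.5
```

Even a dog that is right 90% of the time in both directions produces alerts that are wrong about half the time when only 10% of searched vehicles actually carry drugs, which sits uneasily with perceived infallibility.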

In the authors’ study, they provided jury-eligible individuals with a summary of a case in which a drug dog had alerted to drugs, but no drugs were actually found. 33.5% of participants indicated they would find the person guilty and 66.5% indicated they would not. The main finding was a correlation between guilty verdicts and belief in dog drug detection.

As a whole, I found the review in this article quite useful, and it’s interesting that people indicate they would find defendants guilty of drug offenses despite no drugs being found, and that they place such great weight on dogs. There were some exploratory analyses in here (some properly flagged as exploratory, such as relationships between study variables and need for cognition). I’d encourage the authors to preregister such analyses in the future.

Some notes on Are Forensic Scientists Experts? (Alice Towler et al)

I finally read Alice Towler and colleagues’ “Are Forensic Scientists Experts?” (Journal of Applied Research in Memory and Cognition), a very useful and important guide to cognitive science’s current research on expertise and how forensic science (mostly pattern matching) fits into that research. They focus on handwriting analysis, fingerprint examination, and facial image comparison.

As to expertise, the authors say:

Cognitive scientists have studied expert performance for many decades, and as such, are well-placed to examine the question of whether forensic scientists are experts. Prominent researchers in this field have defined expertise as “consistently superior performance on a specified set of representative tasks for a domain”

As to handwriting analysis, the authors’ review finds that experts do not make more correct decisions than novices, but do avoid errors better:

Critically, research across the discipline consistently shows that the difference between handwriting examiner and novice performance is not due to the examiners making a greater proportion of correct decisions. Instead, group differences lie in the frequency of inconclusive and incorrect decisions, whereby examiners avoid making many of the errors novices make…
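
A toy decomposition (my own made-up proportions, not the paper’s data) shows how this can happen once “inconclusive” is available as a third response option:

```python
# Made-up proportions (not Towler et al.'s data): examiners and novices
# can share the same correct-decision rate yet differ sharply in errors,
# because trials can also end in an "inconclusive" call.

novice   = {"correct": 0.70, "inconclusive": 0.05, "error": 0.25}
examiner = {"correct": 0.70, "inconclusive": 0.25, "error": 0.05}

for label, rates in (("novice", novice), ("examiner", examiner)):
    assert abs(sum(rates.values()) - 1.0) < 1e-9  # the three options exhaust each trial
    print(f"{label}: {rates['correct']:.0%} correct, {rates['error']:.0%} errors")
```

On this picture, expertise shows up as calibrated caution rather than raw accuracy.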

For fingerprint examiners, the review finds that they are generally pretty accurate, but with considerable variation between examiners:

These studies, alongside other work by cognitive scientists, provide compelling converging evidence that trained fingerprint examiners are demonstrably more accurate than untrained novices at judging whether or not two fingerprints were left by the same person … However, they also show low intra-examiner repeatability and, as a group, demonstrate a surprisingly wide range of performance.

Facial identification practitioners also make fewer errors than novices, but some practitioners perform considerably worse than others:

…more recent work has also found superior accuracy in forensic facial examiners compared to untrained novices (Towler, White, & Kemp, 2017; White, Dunn, et al., 2015), suggesting that they are indeed experts. Importantly however, all of these studies focused on group-level differences between novices and examiners. When comparing individual examiners on these tasks, large differences in their performance emerge, with some examiners making 25% errors and others achieving almost perfect accuracy.

Interestingly, they also find that expert fingerprint analysis may rely more on quick, unconscious processes than facial image comparison does.

They end with some calls for more research in this area, which is in its infancy. Worryingly, there is a lot we don’t know about the expertise of “experts”.

Some notes on Beyond the Witness: Bringing A Process Perspective to Modern Evidence Law (Edward K Cheng & G Alexander Nunn)

I recently came across Ed Cheng and Alex Nunn’s paper “Beyond the Witness: Bringing A Process Perspective to Modern Evidence Law” (forthcoming in Texas Law Review) and thought it was worth a few notes. It converges with a lot of ideas I’ve been thinking about lately and really helped clarify my thinking.

Cheng and Nunn lay out a convincing argument that the witness-centric model is outdated and inefficient. Unlike when the trial was invented, a great deal of evidence today does not originate with human witnesses, but with processes (e.g., software, video recording, business practices).

In response to the rise of process, Cheng and Nunn suggest changes to several evidence rules and procedures.

For example, they would reframe the subpoena to focus on process, forcing parties relying on a process to disclose it for testing. Although they do not discuss Rebecca Wexler’s work on trade secret claims over criminal justice-related algorithms, I have to think Cheng and Nunn’s model would demand disclosure in such cases. This can be seen as a new focus on transparency of procedure:

Courts cannot put machines, business processes, or other standardized systems under oath, observe their demeanor, or cross-examine them. But courts can construct new mechanisms to achieve their functional equivalents. New rules can make the processes that underlie process-based evidence more transparent to the jury, provide opportunities for an opposing party to attack them, and give guidance on how to assess their reliability.

Similarly, they would rethink the Confrontation Clause to focus on process when the evidence is mainly objective (versus, for instance, the subjective judgment of a forensic examiner who should be personally examined).

Finally, they offer ways to test the credibility of procedure: testing, transparency, and objectivity. I was especially interested in transparency:

Reliability often comes from transparency. A process whose internal workings and outcomes are publicly observed and subject to criticism will generally be more robust and accurate than one closely guarded. This preference for transparency extends well beyond the enhanced discovery rules proposed earlier. Enhanced discovery—that is, access and disclosure by the opposing party within the narrow confines of litigation—is the bare minimum demanded to ensure the workings of the adversarial system. Yet enhanced discovery alone is far from ideal for ensuring reliability. By contrast, a process in the public domain is subject to perpetual access and testing by any interested party, making it far more likely to be reliable.

Overall, I thought this was an excellent paper with some well-thought-out solutions.

My main quibble is that I think Cheng and Nunn portray an overly rosy view of science. In particular, in their framework they are happy to defer to the structures of science. As we know from the reproducibility crisis, however, the scientific process hasn’t always worked so well. We must delve into the specifics of how the scientific structures were actually employed. Moreover, they seem to assume that scientists typically disclose everything when publishing.

These two quotes stood out to me as a little off:

Since information on the publication process is readily obtainable (and in some cases judicially noticeable), scientific articles and treatises are easily admissible. A jury can then assess the evidentiary weight of a treatise by considering the reliability of the publication process.

and:

While this system of editing necessarily invokes human actors, it is primarily process-based evidence. Peer review editors in the hard sciences follow pre-determined criteria when scrutinizing articles. Each citation and assertion will be analyzed for accuracy and conformance to known scientific positions. The reliability of a scientific article, then, stems primarily from the quality assurances provided by a journal rather than the ipse dixit of a particular author.

Some notes on Constructing Evidence and Educating Juries: The Case for Modular, Made-In-Advance Expert Evidence about Eyewitness Identifications and False Confessions (Jennifer L Mnookin)

At the recommendation of one of the editors of the Osgoode Hall Law Journal, I recently read Jennifer Mnookin’s excellent article on “modular” expert evidence. In our forthcoming paper, Will Crozier and I suggested that expert evidence about false confessions and the fallibility of eyewitness memory is excluded on the basis of a misunderstanding of human psychology. In short, Canadian courts deem such expertise unnecessary because it simply duplicates the knowledge and experience of the factfinder. We disagree: people are often not aware of how their memory works or of how strong the impact of the situation is on their behaviour (including those forces that produce false confessions).
