Some notes on Perceived infallibility of detection dog evidence: implications for juror decision-making

a good boy.

I just read Lisa Lit and colleagues’ “Perceived infallibility of detection dog evidence: implications for juror decision-making” (Criminal Justice Studies). Since reading about the role of dog evidence in the wrongful conviction of Guy Paul Morin, I’ve wondered how such evidence is used at trial. This paper filled in some of those gaps for me and provided some new empirical evidence.

The article provides a brief background on how courts evaluate drug detection dog evidence (in the U.S.). It sounds an awful lot like how they evaluate other contentious forensic expert evidence:

Detection dog evidence is among the more technical areas of forensic science, and scientific falsifiability would be a high threshold for it to meet given the common challenges associated with this form of evidence. In practice, however, the 2013 Harris v. Florida Supreme Court case ‘closed the door on science-based challenges to the reliability of canine sniff evidence’ (Shoebotham, 2016, p. 227). Rather than confronting the fundamental usage of detection dog evidence, the Harris case accepted this practice and instead limited questions of reliability to the performance of individual dogs (Shoebotham, 2016; Taslitz, 2013).

In assessing a particular dog’s reliability, the courts weigh its training and certifications, and in some cases, field performance (Fazekas, 2012; Shoebotham, 2016; Taslitz, 2013). Anecdotal support for dog training and certification is routinely provided by handlers and dog industry professionals who are considered expert witnesses, including the training and certification protocols recommended by The Scientific Working Group on Dog and Orthogonal Detector Guidelines (SWGDOG). However, it is important to note that there are no national standards for dog training and that certifications are typically generated and provided by the private agencies that train and then ultimately sell detection dogs (Johnen, Wolfgang, & Fischer-Tenhagen, 2017; Minhinnick, 2016; Shoebotham, 2016). Accordingly, when it comes to the reliability and admissibility of detection dog evidence, the Harris case has resulted in the Daubert Standard’s threshold of scientific falsifiability being superseded by criteria that are more subjective and prone to bias.

The article goes on to mention research supporting the impetus behind the authors’ study: drug detection dogs do have an error rate (apparently about 10%, though the paper doesn’t say exactly how that was measured or whether it’s the false positive rate), yet some research suggests that people ascribe a mystical infallibility to dog detection.
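To see why it matters whether that ~10% figure is a false positive rate, here’s a quick back-of-the-envelope Bayes calculation of my own. The sensitivity, false positive rate, and base rates below are assumptions for illustration only, not numbers from the paper:

```python
# Back-of-the-envelope Bayes calculation (my own illustration, not from Lit et al.).
# Assumptions: the ~10% figure is treated as a false positive rate, sensitivity is
# assumed to be 90%, and the base rate of drugs actually being present is varied.

def positive_predictive_value(base_rate, sensitivity=0.90, false_positive_rate=0.10):
    """P(drugs present | dog alerts), via Bayes' theorem."""
    true_alerts = sensitivity * base_rate
    false_alerts = false_positive_rate * (1 - base_rate)
    return true_alerts / (true_alerts + false_alerts)

for base_rate in (0.50, 0.10, 0.01):
    print(f"base rate {base_rate:.0%}: P(drugs | alert) = {positive_predictive_value(base_rate):.0%}")

# base rate 50%: P(drugs | alert) = 90%
# base rate 10%: P(drugs | alert) = 50%
# base rate 1%: P(drugs | alert) = 8%
```

Under these assumed numbers, an alert is strong evidence when drugs are usually present but quite weak when the base rate is low, which is exactly why treating alerts as infallible is worrying.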

In their study, the authors provided jury-eligible individuals with a summary of a case in which a drug dog had detected a drug, but the drug wasn’t actually found. Of the participants, 33.5% indicated they would find the person guilty and 66.5% indicated they would not. The main finding was a correlation between guilty verdicts and belief in dog drug detection.
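For what it’s worth, the kind of relationship reported there (a binary verdict against a belief rating) is often quantified with a point-biserial correlation. A minimal sketch with made-up numbers, not the study’s data:

```python
# Point-biserial correlation sketch (toy numbers of my own, not the study's data).
# Coding the verdict as 0/1 and running a Pearson correlation against the belief
# rating gives the point-biserial coefficient.
from scipy.stats import pearsonr

verdict = [1, 0, 1, 1, 0, 0, 0, 1, 0, 0]  # 1 = guilty, 0 = not guilty
belief = [6, 3, 7, 5, 2, 4, 3, 6, 1, 2]   # hypothetical 1-7 belief-in-dog-detection rating

r, p = pearsonr(verdict, belief)
print(f"point-biserial r = {r:.2f}, p = {p:.3f}")
```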

On the whole, I found the review in this article quite useful. It’s striking that people say they would find someone guilty of a drug offense even though no drugs were found, and that they place such great weight on dogs. There were some exploratory analyses here (some properly flagged as exploratory, like the relationships between study variables and need for cognition); I’d encourage the authors to preregister such analyses in the future.

Some notes on Are Forensic Scientists Experts? (Alice Towler et al)

I finally read Alice Towler and colleagues’ “Are Forensic Scientists Experts?” (Journal of Applied Research in Memory and Cognition), a very useful and important guide to cognitive science’s current research on expertise and how forensic science (mostly pattern matching) fits into that research. They focus on handwriting analysis, fingerprint examination, and facial image comparison.

As to expertise, the authors say:

Cognitive scientists have studied expert performance for many decades, and as such, are well-placed to examine the question of whether forensic scientists are experts. Prominent researchers in this field have defined expertise as “consistently superior performance on a specified set of representative tasks for a domain”

As to handwriting analysis, the authors’ review finds that experts do not make more correct decisions than novices, but do avoid errors better:

Critically, research across the discipline consistently shows that the difference between handwriting examiner and novice performance is not due to the examiners making a greater proportion of correct decisions. Instead, group differences lie in the frequency of inconclusive and incorrect decisions, whereby examiners avoid making many of the errors novices make…

For fingerprint examiners, they find they are generally pretty accurate but with considerable variation between examiners:

These studies, alongside other work by cognitive scientists, provide compelling converging evidence that trained fingerprint examiners are demonstrably more accurate than untrained novices at judging whether or not two fingerprints were left by the same person … However, they also show low intra-examiner repeatability and, as a group, demonstrate a surprisingly wide range of performance.

Facial identification practitioners also make fewer errors than novices, but some practitioners perform considerably worse than others:

…more recent work has also found superior accuracy in forensic facial examiners compared to untrained novices (Towler, White, & Kemp, 2017; White, Dunn, et al., 2015), suggesting that they are indeed experts. Importantly however, all of these studies focused on group-level differences between novices and examiners. When comparing individual examiners on these tasks, large differences in their performance emerge, with some examiners making 25% errors and others achieving almost perfect accuracy.
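That gap between group-level accuracy and individual performance is easy to miss, so here is a toy simulation of my own (made-up accuracies, not the authors’ data) showing how examiners can beat novices on average while individual examiners range from 25% errors to near-perfect:

```python
# Toy simulation (my own made-up accuracies, not the authors' data): a group of
# examiners outperforms novices on average, yet individual examiners vary widely.
import random

random.seed(0)
N_TRIALS = 100

# Hypothetical per-person accuracies: one examiner at 75% (i.e. 25% errors), others near-perfect.
examiner_accuracies = [0.98, 0.96, 0.92, 0.85, 0.75]
novice_accuracies = [0.72, 0.70, 0.68, 0.71, 0.65]

def proportion_correct(accuracy, n=N_TRIALS):
    """Simulate n same/different judgments and return the proportion correct."""
    return sum(random.random() < accuracy for _ in range(n)) / n

examiner_scores = [proportion_correct(a) for a in examiner_accuracies]
novice_scores = [proportion_correct(a) for a in novice_accuracies]

print("examiner mean accuracy:", sum(examiner_scores) / len(examiner_scores))
print("novice mean accuracy:  ", sum(novice_scores) / len(novice_scores))
print("examiner range:", min(examiner_scores), "-", max(examiner_scores))
```

The group averages look reassuring, but the spread across individual examiners is the part that matters in any particular case.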

Interestingly, they also find that expert fingerprint analysis may rely more on quick, unconscious processes than facial image comparison does.

They end with some calls for more research in this area, which is in its infancy. Worryingly, there is a lot we don’t know about the expertise of “experts”.

Some notes on Beyond the Witness: Bringing A Process Perspective to Modern Evidence Law (Edward K Cheng & G Alexander Nunn)

I recently came across Ed Cheng and Alex Nunn’s paper “Beyond the Witness: Bringing A Process Perspective to Modern Evidence Law” (forthcoming in Texas Law Review) and thought it was worth a few notes. It converges with a lot of ideas I’ve been thinking about lately and really helped clarify my thinking.

Cheng and Nunn lay out a convincing argument that the witness-centric model of evidence is outdated and inefficient. Unlike when the trial was invented, a great deal of evidence today does not originate with human witnesses but with processes (e.g., software, video recordings, business practices).

In response to the rise of process, Cheng and Nunn suggest changes to several evidence rules and procedures.

For example, they would reframe the subpoena to focus on process, forcing parties that rely on a process to disclose it for testing. Although they do not discuss Rebecca Wexler’s work on claiming trade secrets over criminal justice-related algorithms, I have to think Cheng and Nunn’s model would demand disclosure in such cases. This can be seen as a new focus on transparency of procedure:

Courts cannot put machines, business processes, or other standardized systems under oath, observe their demeanor, or cross-examine them. But courts can construct new mechanisms to achieve their functional equivalents. New rules can make the processes that underlie process-based evidence more transparent to the jury, provide opportunities for an opposing party to attack them, and give guidance on how to assess their reliability.

Similarly, they would rethink the Confrontation Clause to focus on process when the evidence is mainly objective (versus, for instance, the subjective judgment of a forensic examiner who should be personally examined).

Finally, they offer ways to test the credibility of procedure: testing, transparency, and objectivity. I was especially interested in transparency:

Reliability often comes from transparency. A process whose internal workings and outcomes are publicly observed and subject to criticism will generally be more robust and accurate than one closely guarded. This preference for transparency extends well beyond the enhanced discovery rules proposed earlier. Enhanced discovery—that is, access and disclosure by the opposing party within the narrow confines of litigation—is the bare minimum demanded to ensure the workings of the adversarial system. Yet enhanced discovery alone is far from ideal for ensuring reliability. By contrast, a process in the public domain is subject to perpetual access and testing by any interested party, making it far more likely to be reliable.

Overall, I thought this was an excellent paper with some well-thought-out solutions.

My main quibble is that I think Cheng and Nunn portray an overly rosy view of science. In particular, their framework is happy to defer to the structures of science. As we know from the reproducibility crisis, however, the scientific process hasn’t always worked so well; we must delve into the specifics of how those structures were employed in a given case. Moreover, they seem to assume that scientists typically disclose everything when publishing.

These two quotes stood out to me as a little off:

Since information on the publication process is readily obtainable (and in some cases judicially noticeable), scientific articles and treatises are easily admissible. A jury can then assess the evidentiary weight of a treatise by considering the reliability of the publication process.

and:

While this system of editing necessarily invokes human actors, it is primarily process-based evidence. Peer review editors in the hard sciences follow pre-determined criteria when scrutinizing articles. Each citation and assertion will be analyzed for accuracy and conformance to known scientific positions. The reliability of a scientific article, then, stems primarily from the quality assurances provided by a journal rather than the ipse dixit of a particular author.

Some notes on Constructing Evidence and Educating Juries: The Case for Modular, Made-In-Advance Expert Evidence about Eyewitness Identifications and False Confessions (Jennifer L Mnookin)

At the recommendation of one of the editors of the Osgoode Hall Law Journal, I recently read Jennifer Mnookin’s excellent article on “modular” expert evidence. In our forthcoming paper, Will Crozier and I suggest that expert evidence about false confessions and the fallibility of eyewitness memory is excluded on the basis of a misunderstanding of human psychology. In short, Canadian courts deem such expertise unnecessary because it supposedly duplicates the knowledge and experience of the factfinder. We disagree: people are often not aware of how their memory works or of how strongly the situation shapes their behaviour (including the forces that produce false confessions).
