Some notes on The impact of proficiency testing information and error aversions on the weight given to fingerprint evidence

I recently read Greg Mitchell and Brandon Garrett’s The impact of proficiency testing information and error aversions on the weight given to fingerprint evidence.

Mitchell and Garrett performed a behavioural study, informing mock jurors of the results of a fingerprint examiner’s proficiency tests. They also manipulated the types of errors the proficiency tests revealed (if any): false positives and false negatives. They wanted to see if this information would affect the weight given to the evidence and the mock jurors’ perception of the strength of the case.

Mitchell and Garrett found that their participants were sensitive to proficiency information. In particular, when proficiency was higher, participants reported it was more likely the accused in the mock trial left the prints. There wasn’t, however, a significant difference between perfect and high proficiency, or between high and medium. I wonder if these are the most likely levels of proficiency for most examiners, and thus whether in practice these results will matter. Still, it’s very good to know that proficiency information has some effect.
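One rough way to think about how proficiency figures might translate into evidential weight is a likelihood-ratio sketch. To be clear, the numbers and the framing below are my own illustration, not anything from Mitchell and Garrett’s materials:

```python
# A rough likelihood-ratio sketch of how an examiner's assumed error
# rates translate into the weight of a reported fingerprint match.
# The proficiency levels below are my own hypothetical numbers, not
# figures from the study.

def likelihood_ratio(true_positive_rate, false_positive_rate):
    """LR = P(reported match | same source) / P(reported match | different source)."""
    return true_positive_rate / false_positive_rate

def posterior_prob(prior_prob, lr):
    """Update a prior probability of 'same source' by the likelihood ratio."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * lr
    return posterior_odds / (1 + posterior_odds)

# Assumed (hypothetical) proficiency levels: (hit rate, false positive rate)
levels = {"high": (0.98, 0.02), "medium": (0.90, 0.10)}
for name, (tpr, fpr) in levels.items():
    lr = likelihood_ratio(tpr, fpr)
    print(f"{name}: LR = {lr:.0f}, posterior from a 50% prior = {posterior_prob(0.5, lr):.2f}")
```

On these assumed numbers, a “high”-proficiency examiner’s reported match carries a likelihood ratio of about 49 and a “medium” examiner’s about 9. Nothing here is from the paper; it is just one way to see how proficiency information could, in principle, be converted into weight.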

The participants did not seem to distinguish between types of error, suggesting that it perhaps became too technical for them at that point and they needed more direction.

Finally, I wish that Mitchell and Garrett had included a DV about whether the juror would vote to convict beyond a reasonable doubt. I’d be curious to see if just showing the examiner had 98% proficiency could drop votes from convict to not guilty.

Some notes on The Admissibility of Forensic Reports in the Post-Justice Scalia Supreme Court

I recently read Laird Kirkpatrick’s The Admissibility of Forensic Reports in the Post-Justice Scalia Supreme Court.

It’s a short paper about recent U.S. jurisprudence on requiring forensic examiners to come into court and be examined on their written reports (i.e., the scope of the Confrontation Clause). I knew, from reading Cheng and Nunn’s recent piece, that this is a controversial area right now, but I didn’t realize just how controversial, or how much Scalia’s absence would shape the future jurisprudence. Like Cheng and Nunn, I am not sure how much a live witness really adds in such matters; transparency and openness of the process seem more important. Still, this area of law may help determine the admissibility of pre-recorded expert evidence, an area I am very interested in.

It might be useful for me to just summarize the cases reviewed in Kirkpatrick’s article.

Melendez-Diaz v Massachusetts, 557 US 305 (2009): This was a 5-4 decision concerning the constitutionality of a MA law allowing forensic reports to be tendered without calling the analyst who made the report. The instant report was about whether a certain substance was cocaine.

The majority decision, written by Justice Scalia, said that this was a testimonial statement and so it triggered the confrontation clause.

The dissent seemed to be all over the place, saying that the maker of the report wasn’t a conventional “accusatory” witness. It also discussed some policy reasons militating in favour of exempting forensic reports: it is not always clear which analyst should appear, there are logistical difficulties, and it would not be hard for the accused to simply subpoena the analyst.

The majority judgment noted previous problems with the state’s forensic scientific evidence, a point I’m glad they drew attention to.

Williams v Illinois, 567 US 50 (2012): This case is even more confusing. It concerned the report from a private DNA testing service (Cellmark) that created a latent DNA profile based on semen found on the complainant. The Cellmark analyst was not called to testify, which drew Confrontation Clause concerns.

A four-justice plurality allowed the evidence because it was not being offered for the truth of its contents and because it was not accusatorial. Justice Thomas concurred in the result but rejected the plurality’s reasons, instead saying the report was not testimonial for lack of formality - it was not certified (I don’t have sufficient background in US evidence law to follow this, but it seems rather technical).

The dissent would have held that the Confrontation Clause was violated. In particular, there was no chance to examine the Cellmark analyst on his or her proficiency.

Stuart v Alabama, 139 S Ct 36 (2018): The Supreme Court recently denied cert in this case, but the dissent from the denial is very interesting.

Justice Gorsuch (joined by Justice Sotomayor) discussed the problems with forensic science and the importance of testing it in court. He said the report was clearly admitted for the truth of its contents (unlike the plurality in Williams), and I agree. He also did not agree with Justice Thomas’s concurrence about certification.

As a result of this cert decision, the application of the Confrontation Clause to forensic reports seems to be in considerable doubt, with Gorsuch perhaps taking up Scalia’s mantle.

I’d like to see a future decision take a careful look at exactly what adversarial testing might really reveal about a forensic report. And can some sort of controlled transparency reveal the same information more efficiently, much the way the open science movement is working to reveal the uncertainties in the scientific process?

Some notes on The Frailties of Human Memory and the Accused’s Right to Accurate Procedures

Roberts makes a strong argument that many of the procedural safeguards surrounding eyewitness identifications should be extended to evidence of other witnesses. This includes expert evidence about memory and judicial warnings.

He also notes that if we are interested in the best evidence, the initial witness statement should be recorded and admitted as an exception to the hearsay rule (as legislation in the UK allows). It strikes me that this is similar to a recent argument by Cheng and Nunn that the original witness-centric model, developed hundreds of years ago, should be carefully amended in light of modern technology.

Some notes on The forensic disclosure model: What should be disclosed to, and by, forensic experts?

I recently read Mohammed A. Almazrouei, Itiel E. Dror, and Ruth M. Morgan’s The forensic disclosure model: What should be disclosed to, and by, forensic experts?.

Almazrouei and colleagues lay out a model of “forensic disclosure”: general principles for what forensic scientists should be told about a case and what they should disclose about their work. The article focuses on the psychology of forensic examiners, and thus on the information that should be disclosed about the factors that may have biased the examiner:

The importance of considering the role of judgement and decision making within the forensic science process has been demonstrated in a range of studies (Roux et al., 2012; Taylor et al., 2016a, 2016b; Stevenage and Bennett, 2017; Nakhaeizadeh et al., 2017). Decision making is an inherent and intrinsic part of the forensic science process (Morgan et al., 2018), and yet it is arguably one of the least well defined and articulated parts in the delivery of a forensic reconstruction. It has been identified that decision making in forensic science is susceptible to extrinsic and intrinsic factors and that unconscious biases can occur in a wide range of different scenarios. Therefore, for minimally biased evaluations of forensic materials and for transparency to be achieved, ‘forensic disclosure’ needs to be considered. Forensic disclosure establishes what should be disclosed ‘to’ and ‘by’ forensic examiners. It is about making sure that forensic examiners get the necessary task-relevant information and forensic materials, and that examiners then provide the relevant information and materials to the appropriate people. In addition to information and forensic evidence management, forensic disclosure allows transparency of the context, interactions and pathways of decision making within the forensic science process, thereby disclosing how an inference was made, and within what context.

The forensic disclosure model we present here therefore addresses both what is disclosed to the forensic examiner prior to and during analysis being undertaken, and what the forensic examiner discloses formally in their report and court testimony, as well as in any informal discussions and interactions (e.g., discussions with other forensic examiners, lawyers, police officers, etc.). There are five questions that need addressing for effective implementation of the forensic disclosure model (which are illustrated with examples in Section 2): What information was given to and given by the forensic examiner? When? By/to whom? How? and Why?

I thought this article was very useful in putting together a great deal of research about what factors influence forensic examiners and then linking that to disclosure. Moreover, it gives several tangible examples of the framework it is laying out. I’ve also been thinking and writing a lot about transparency in expert evidence (see here and here). In particular, I’ve been suggesting that practices from open science are even more important in criminal legal contexts where accused parties require a full understanding of the scientific case they face. This matches up well with Almazrouei and colleagues’ conclusion:

When possible, forensic experts should share the actual evidence material with other experts, even and especially, with the experts retained by the opposing legal side. This should be done as a matter of routine, without requiring a subpoena from the court or approval by the lawyers. Scientists should share the evidence materials and findings so other scientists can run their own tests, make their own observations, interpretations, conclusions and forensic reconstruction in order that the court can be equipped with a full picture of the forensic science evidence and what it means in a specific case. Sometimes this is not technically and pragmatically possible (e.g., when a test destroys the evidence), but there are instances when there are no practical reasons to preclude sharing the evidence. Especially in the adversarial legal system, the concept of ‘sharing’ is currently minimal; often there is no willingness among the legal sides to share forensic science evidence findings, forensic evidence materials, or to ‘share’ a common forensic expert (Dror et al., 2018b).

According to the forensic disclosure model, scientific evidence should be routinely shared by both the prosecution and the defence regardless of the side retaining the forensic expert(s), or who they work for. Not only does the adversarial legal system make such a ‘sharing’ difficult, but economic interests can often pose issues around sharing knowledge (especially in a commercial market environment where such knowledge can have commercial value). The forensic disclosure model that we put forward in this paper is underpinned by the assertion that science and fair justice mandates that there is an openness and sharing of scientific findings, and even the scientific evidence materials. Forensic disclosure calls for transparency and maximum disclosure of the science, so science can be used (rather than abused or misused) in the administration of justice within the legal system.

Whilst there are legal and commercial obstacles that can hinder the effective implementation of this element of the forensic disclosure model, the fair administration of justice is paramount. Therefore, transparent and open discussions, particularly between the legal and forensic science domains, on the concept of routinely sharing the scientific evidence among both sides of the adversarial legal system are necessary looking forward. Policy changes can enable change, such as when the disclosure practice in the UK were revised (McCartney, 2018), but there is still some way to go to achieve the transparency and disclosure that is necessary.

There are lots of open science concepts in there!

It was also good to think about some complementary ideas, especially from the realm of human factors. Going forward, I will be very interested to see if forensics labs take this guidance seriously. While some clearly are (e.g., the Houston Forensic Science Center), it seems to me that beyond the lack of a framework (until now), the biggest thing holding them back is a lack of institutional will.

Some notes on Perceived infallibility of detection dog evidence: implications for juror decision-making

a good boy.

I just read Lisa Lit and colleagues’ “Perceived infallibility of detection dog evidence: implications for juror decision-making” (Criminal Justice Studies). Since reading about the role of dog evidence in the wrongful conviction of Guy Paul Morin, I’ve wondered about how such evidence is used at trial - this paper filled in some of those gaps for me, and provided some new empirical evidence.

The article provided a brief background on how courts evaluate dog drug detection evidence (in the U.S.). It sounds an awful lot like how they evaluate other contentious forensic expert evidence:

Detection dog evidence is among the more technical areas of forensic science, and scientific falsifiability would be a high threshold for it to meet given the common challenges associated with this form of evidence. In practice, however, the 2013 Harris v. Florida Supreme Court case ‘closed the door on science-based challenges to the reliability of canine sniff evidence’ (Shoebotham, 2016, p. 227). Rather than confronting the fundamental usage of detection dog evidence, the Harris case accepted this practice and instead limited questions of reliability to the performance of individual dogs (Shoebotham, 2016; Taslitz, 2013).

In assessing a particular dog’s reliability, the courts weigh its training and certifications, and in some cases, field performance (Fazekas, 2012; Shoebotham, 2016; Taslitz, 2013). Anecdotal support for dog training and certification is routinely provided by handlers and dog industry professionals who are considered expert witnesses, including the training and certification protocols recommended by The Scientific Working Group on Dog and Orthogonal Detector Guidelines (SWGDOG). However, it is important to note that there are no national standards for dog training and that certifications are typically generated and provided by the private agencies that train and then ultimately sell detection dogs (Johnen, Wolfgang, & Fischer-Tenhagen, 2017; Minhinnick, 2016; Shoebotham, 2016). Accordingly, when it comes to the reliability and admissibility of detection dog evidence, the Harris case has resulted in the Daubert Standard’s threshold of scientific falsifiability being superseded by criteria that are more subjective and prone to bias.

The article goes on to mention research supporting the impetus behind the authors’ study: drug detection dogs do have an error rate (apparently about 10%, though the article doesn’t say exactly how that was measured or whether it’s the false positive rate), yet some research has suggested that people ascribe a mystical infallibility to dog detection.
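For intuition on why it matters whether that 10% is a false positive rate, here is a quick Bayes’ theorem sketch. All the numbers are my own illustrative assumptions, not data from the paper:

```python
# A quick Bayes' theorem sketch of why the base rate matters when
# interpreting a dog's alert. The figures below are hypothetical,
# not data from Lit and colleagues' study.

def alert_ppv(base_rate, sensitivity, false_positive_rate):
    """P(drugs actually present | dog alerts)."""
    true_alerts = base_rate * sensitivity
    false_alerts = (1 - base_rate) * false_positive_rate
    return true_alerts / (true_alerts + false_alerts)

# If only 1 in 20 sniffed vehicles actually carries drugs, a dog that
# is "90% accurate" in both directions produces alerts that are wrong
# most of the time:
print(alert_ppv(base_rate=0.05, sensitivity=0.90, false_positive_rate=0.10))
```

On these assumed numbers, only about a third of alerts would be true positives, which is one reason a bare “90% accurate” figure can badly mislead a factfinder.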

In the authors’ study, they provided jury-eligible individuals with a summary of a case in which a drug dog had detected a drug, but the drug wasn’t actually found. 33.5% of participants indicated they would find the person guilty and 66.5% did not. The main finding was that there was a correlation between guilty verdicts and belief in dog drug detection.

As a whole, I found the review in this article quite useful, and it’s interesting that people indicate they will find defendants guilty of drug offenses despite the drugs not being found, and that they place such great weight on dogs. There were some exploratory analyses in here (some properly flagged as exploratory, like relationships between study variables and need for cognition). I’d encourage the authors to preregister these analyses in the future.

UQ Expert Evidence Colloquium, Pt 2 (February 12, 2019)

On February 12, 2019, the UQ Law, Science and Technology Program convened two talks on expert evidence at the Banco Court (find the program here). 

Emma Cunliffe

For those who could not make it, and for those wishing to follow up on the talks, we have compiled brief summaries and lists of the authorities the speakers relied on.

You may be especially interested in this article that was mentioned during the open discussion: Gary Edmond et al, ‘How to cross-examine forensic scientists: A guide for lawyers’ (2016) 39 Australian Bar Review. [PDF]

Emma Cunliffe - ‘Evaluating Forensic Medicine’

Emma discussed expert evidence in the form of forensic medicine and the challenges it poses to the legal system. In particular, forensic pathologists have often made strong claims that, if taken at face value, can be very prejudicial at trial. They are also susceptible to heuristics (i.e., mental shortcuts that may produce inaccurate results) and biases (i.e., error in a predictable direction) due to the adversarial forces they are exposed to. Further, performance at trial is a poor form of feedback for the experts. They may often never learn if they were right or wrong and may get a false sense of confidence from participating in the trial and seeing their evidence accepted.

Emma’s slides can be found here.

Rachel Dioso-Villa, ‘Detective and Scientist: Unpacking Fire Investigation and Its Application in the Courtroom’

Rachel discussed the fire investigation evidence that is often admitted as expert evidence. This evidence raises a host of issues because the putative experts also wear the hat of investigator, and thus are deeply involved in the case. Moreover, several methods for determining the cause of a fire have not been systematically studied or validated.

I noticed a common theme between the two talks was the danger of an absence of evidence serving as positive evidence. In forensic medical cases, this may be the expert never having seen three natural infant deaths in the same family and concluding something sinister must have happened. A similar thing happens with “negative corpus” evidence in fire cases, in which the expert cannot come up with a natural cause. The trouble seems to be that we often don’t know what we don’t know - how can you be confident you’ve eliminated possible causes that you may have never thought of?

Rachel’s slides can be found here.

I would also like to apologize for the technical difficulties. We failed to notify the court’s booking team that we would need AV support. The fault is solely with us.

Some notes on Are Forensic Scientists Experts? (Alice Towler et al)

I finally read Alice Towler and colleagues’ “Are Forensic Scientists Experts?” (Journal of Applied Research in Memory and Cognition), a very useful and important guide to cognitive science’s current research on expertise and how forensic science (mostly pattern matching) fits into that research. They focus on handwriting analysis, fingerprint examination, and facial image comparison.

As to expertise, the authors say:

Cognitive scientists have studied expert performance for many decades, and as such, are well-placed to examine the question of whether forensic scientists are experts. Prominent researchers in this field have defined expertise as “consistently superior performance on a specified set of representative tasks for a domain”

As to handwriting analysis, the authors’ review finds that experts do not make more correct decisions than novices, but do avoid errors better:

Critically, research across the discipline consistently shows that the difference between handwriting examiner and novice performance is not due to the examiners making a greater proportion of correct decisions. Instead, group differences lie in the frequency of inconclusive and incorrect decisions, whereby examiners avoid making many of the errors novices make…

For fingerprint examiners, they find that examiners are generally quite accurate, but with considerable variation between them:

These studies, alongside other work by cognitive scientists, provide compelling converging evidence that trained fingerprint examiners are demonstrably more accurate than untrained novices at judging whether or not two fingerprints were left by the same person … However, they also show low intra-examiner repeatability and, as a group, demonstrate a surprisingly wide range of performance.

Facial identification practitioners also make fewer errors than novices, but some practitioners perform considerably worse than others:

…more recent work has also found superior accuracy in forensic facial examiners compared to untrained novices (Towler, White, & Kemp, 2017; White, Dunn, et al., 2015), suggesting that they are indeed experts. Importantly however, all of these studies focused on group-level differences between novices and examiners. When comparing individual examiners on these tasks, large differences in their performance emerge, with some examiners making 25% errors and others achieving almost perfect accuracy.

Interestingly, they also find that expert fingerprint analysis may rely more on quick and unconscious processes than facial image comparison does.

They end with some calls for more research in this area, which is in its infancy. Worryingly, there is a lot we don’t know about the expertise of “experts”.

Some notes on Beyond the Witness: Bringing A Process Perspective to Modern Evidence Law (Edward K Cheng & G Alexander Nunn)

I recently came across Ed Cheng and Alex Nunn’s recent paper “Beyond the Witness: Bringing A Process Perspective to Modern Evidence Law” (forthcoming in Texas Law Review) and thought it was worth a few notes. It also converges with a lot of ideas I’ve been thinking about lately and really helped clarify my thinking.

Cheng and Nunn lay out a convincing argument that the witness-centric model is outdated and inefficient. Unlike when the trial was invented, a great deal of evidence today does not originate with human witnesses but with processes (e.g., software, video recording, business practices, etc.).

In response to the rise of process, Cheng and Nunn suggest changes to several evidence rules and procedures.

For example, they would reframe the subpoena to focus on process, forcing parties using process to disclose it for testing. Although they do not discuss Rebecca Wexler’s work on claiming trade secrets over criminal justice-related algorithms, I have to think Cheng and Nunn’s model would demand disclosure in such cases. This can be seen as a new focus on transparency of procedure:

Courts cannot put machines, business processes, or other standardized systems under oath, observe their demeanor, or cross-examine them. But courts can construct new mechanisms to achieve their functional equivalents. New rules can make the processes that underlie process-based evidence more transparent to the jury, provide opportunities for an opposing party to attack them, and give guidance on how to assess their reliability.

Similarly, they would rethink the Confrontation Clause to focus on process when the evidence is mainly objective (versus, for instance, the subjective judgment of a forensic examiner who should be personally examined).

Finally, they offer ways to test the credibility of procedure: testing, transparency, and objectivity. I was especially interested in transparency:

Reliability often comes from transparency. A process whose internal workings and outcomes are publicly observed and subject to criticism will generally be more robust and accurate than one closely guarded. This preference for transparency extends well beyond the enhanced discovery rules proposed earlier. Enhanced discovery—that is, access and disclosure by the opposing party within the narrow confines of litigation—is the bare minimum demanded to ensure the workings of the adversarial system. Yet enhanced discovery alone is far from ideal for ensuring reliability. By contrast, a process in the public domain is subject to perpetual access and testing by any interested party, making it far more likely to be reliable.

Overall, I thought this was an excellent paper with some well-thought out solutions.

My main quibble is that I think Cheng and Nunn portray an overly rosy view of science. In particular, in their framework, they are happy to defer to the structures of science. As we know from the reproducibility crisis, however, scientific process hasn’t always worked so well. We must delve into the specifics of how the scientific structures were employed. Moreover, they seem to assume that scientists typically disclose everything when publishing.

These two quotes stood out to me as a little off:

Since information on the publication process is readily obtainable (and in some cases judicially noticeable), scientific articles and treatises are easily admissible. A jury can then assess the evidentiary weight of a treatise by considering the reliability of the publication process.


While this system of editing necessarily invokes human actors, it is primarily process-based evidence. Peer review editors in the hard sciences follow pre-determined criteria when scrutinizing articles. Each citation and assertion will be analyzed for accuracy and conformance to known scientific positions. The reliability of a scientific article, then, stems primarily from the quality assurances provided by a journal rather than the ipse dixit of a particular author.

Neurolaw in Australia: The Use of Neuroscience in Australian Criminal Proceedings

I recently posted preprints of the above article (coauthored with Armin Alimardani), forthcoming in Neuroethics. Here is the abstract:

Recent research has detailed the use of neuroscience in several jurisdictions, but Australia remains a notable omission. To fill this substantial void we performed a systematic review of neuroscience in Australian criminal cases. The first section of this article reports the results of our review by detailing the purposes for which neuroscience is admitted into Australian criminal courts. We found that neuroscience is being admitted pre-trial (as evidence of fitness to stand trial), at trial (to support the defence of insanity and substantial impairment of the mind), and during sentencing. In the second section, we evaluate these applications. We generally found that courts admit neuroscience cautiously, and to supplement more well-established forms of evidence. Still, we found some instances in which the court seemed to misunderstand the neuroscience. These cases ranged from interpreting neuroscience as “objective” evidence to admitting neuroscience when the same non-neuroscientific psychiatric evidence would be inadmissible for being common sense. Furthermore, in some cases, neuroscientific evidence presents a double-edged sword; it may serve to either aggravate or mitigate a sentence. Thus, the decision about whether or not to tender this evidence is risky.

You can find it on the following services:

SSRN: Neurolaw in Australia: The Use of Neuroscience in Australian Criminal Proceedings

LawArXiv: Neurolaw in Australia: The Use of Neuroscience in Australian Criminal Proceedings

UQ Expert Evidence Colloquium (Aug 30, 2018)

On August 30th 2018, the UQ Law, Science and Technology Program convened a series of talks on expert evidence at the Supreme Court Library (find the program here). 

For those who could not make it, and for those wishing to follow up on the talks, we have compiled brief summaries and lists of the authorities the speakers relied on.

Essential Reading

1. A user-friendly guide to cross-examining forensic experts:

Gary Edmond et al, ‘How to cross-examine forensic scientists: A guide for lawyers’ (2016) 39 Australian Bar Review. [PDF]

2. A short and easy-to-read review of the forensic sciences from a leading scientific body:

President’s Council of Advisors on Science and Technology, ‘Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods’ (2016). [link]

3. A slightly longer review of the forensic sciences from a leading scientific body:

National Research Council, ‘Strengthening Forensic Science in the United States: A Path Forward’ (2009). [link]

Rachel Searston & David Hamer

Dr Rachel Searston – ‘Expert Evidence and Evidence on Expertise: Are Forensic Experts’ Decisions a Blackbox?’

Summary: Rachel discussed the psychological processes by which people gain expertise, and why it is typically impossible for them to explain how they are making their decisions. She focused on fingerprint experts.

Nguyen v R [2017] NSWCCA 4

Jason M Tangen, Matthew B Thompson & Duncan J McCarthy, ‘Identifying fingerprint expertise’ (2011) 22:8 Psychological Science 995. [PDF]

Alice Towler et al, ‘Are Forensic Scientists Experts?’ (2018) 7:2 Journal of Applied Research in Memory and Cognition 199-208. [link]

David Hamer

Professor David Hamer – ‘Expert Evidence and Wrongful Convictions in the Adversarial Process’

Summary: David examined the many cases in which expert evidence has contributed to a wrongful conviction. He suggested that the adversarial system is not particularly well-suited to controlling unreliable and misleading expert evidence.

R v Keogh (No 2) [2014] SASCFC 136

Wood v State of New South Wales [2018] NSWSC 1247

Burrell v R (2008) 238 CLR 218

Cheng v R (2000) 203 CLR 248

Darkan v R (2006) 227 CLR 373

R v Taufahema (2007) 228 CLR 232

R v Van Beelen [2016] SASCFC 71

R v Hodges [2018] QCA 92

Mickelberg v R (1989) 167 CLR 259

Lawless v R (1979) 142 CLR 659

Ratten v R (1974) 131 CLR 510

Meachen [2009] All ER (D) 45 (EWCA Crim 1701)

DPP Guidelines (Queensland)

Expert Code of Conduct (Uniform Civil Procedure Rules) (NSW)

Judicial Commission of NSW, Conviction Appeals in NSW (2011)

Simon Cole, ‘Forensic Science and Wrongful Convictions: From exposer to contributor to corrector’ (2012) 46 New England Law Review 711. [link]

Stewart Field & Dennis Eady, ‘Truth-finding and the adversarial tradition: the experience of the Cardiff Law School Innocence Project’ (2017) Criminal Law Review 292. [link]

Stephanie Roberts, ‘Fresh Evidence and Factual Innocence in the Criminal Division of the Court of Appeal’ (2017) 81 Journal of Criminal Law 303. [PDF]

Bibi Sangha & Robert Moles, ‘MacCormick’s Theory of Law, Miscarriages of Justice and the Statutory Basis for Appeals in Australian Criminal Cases’ (2014) 37 University of New South Wales Law Journal 243. [link]

Benjamin Dighton

Benjamin Dighton – The language of expert evidence and the law

Summary: Ben discussed the challenges of translating expert evidence to lay factfinders.

Lewis v The Queen (1987) 88 FLR 104

King v Parker [1912] VLR 152

Briginshaw v Briginshaw (1938) 60 CLR 336

Clark v Ryan (1960) 103 CLR 486

Makita (Australia) Pty Ltd v Sprowles (2001) 52 NSWLR 705

R v Kleimeyer [2014] QCA 56

R v Sica [2014] 2 Qd R 168

FBI Laboratory Announces Discontinuation of Bullet Lead Examinations (2005)

David H Kaye, The Double Helix and the Law of Evidence (June 26, 2009) Penn State Legal Studies Research Paper No. 9-2010. [link]

Mehera San Roque

Mehera San Roque – ‘After Admissibility: Expertise and Imaginary Law’

Summary: Mehera, focusing on Uniform Evidence Law jurisdictions, found courts have not adequately kept unreliable expert evidence from impacting decisions. For example, expert codes of conduct are not always enforced, courts do not engage with leading reports demonstrating that proffered evidence is unreliable, and notionally expert evidence is sometimes admitted under obscure common law exceptions.

Honeysett v R [2014] HCA 29

Chen v R [2018] NSWCCA 106

The Queen v Dickman [2017] HCA 24

IMM v The Queen [2016] HCA 14

JP v DPP [2015] NSWSC 1669

Nguyen v R [2017] NSWCCA 4

Smith v The Queen (2001) 206 CLR 650

Evidence Act (No 25) 1995 (NSW) ss 76, 78, 79, 137

National Institute of Standards and Technology, ‘Latent Print Examination and Human Factors: Improving the Practice through a Systems Approach’ (2012). [link]

Mehera San Roque & Kaye Ballantyne

Kaye Ballantyne – ‘To Err is Human’

Summary: Kaye discussed the validity of several fields of forensic science. The reality is that, for many of these fields, the method’s accuracy is completely unknown and examiners cannot say how likely it is that they made an error.  

See the PCAST and National Academy of Sciences Reports above.

Kathryn McMillan QC

Kathryn McMillan QC - ‘Beyond Common Knowledge: Reviewing the Use of Social Science Evidence in Australian Courts’

Summary: Kathryn considered the (considerable) challenge of developing a rational and practical system for bringing social scientific knowledge into the courtroom.   

JJB v The Queen [2006] NSWCCA 126

Longman v The Queen (1989) 168 CLR 79

R v Fong [1981] Qd R 90

Osland v The Queen [1998] HCA 75

Runjanjic v The Queen (1991) 56 SASR 114

Farrell v The Queen [1998] HCA 50

National Domestic and Family Violence Benchbook

Annie Cossins, ‘Time Out for Longman: Myths, Science and the Common Law’ (2010) 34:1 Melbourne University Law Review 69. [PDF]

Justice Peter Applegarth

Summary: Justice Applegarth reflected on the talks before him and discussed the challenges in introducing reliable social framework evidence into court.

Photo credits go to Nadine Davidson-Wall.

For the full photo album, go here.

Some notes on Constructing Evidence and Educating Juries: The Case for Modular, Made-In-Advance Expert Evidence about Eyewitness Identifications and False Confessions (Jennifer L Mnookin)

At the recommendation of one of the editors of the Osgoode Hall Law Journal, I recently read Jennifer Mnookin’s excellent article on “modular” expert evidence. In our forthcoming paper, Will Crozier and I suggested that expert evidence about false confessions and the fallibility of eyewitness memory is excluded on the basis of a misunderstanding of human psychology. In short, Canadian courts deem expertise unnecessary because it simply duplicates the knowledge and experience of the factfinder. We disagree: people are often not aware of how their memory works and how strong the impact of the situation is on their behaviour (including those forces that produce false confessions).

Read More

R v Livingston: Bias, I presume?

Canadian courts are increasingly interested in the bias (and partiality and non-independence) of expert witnesses. An Ontario trial court’s recent decision in Livingston to exclude a computer expert is an excellent example of that trend (or what anecdotally seems like a trend). In this note, I’ll go over Livingston and try to explain its significance.

Read More

R v Comeau: Who decides history?

By now, the Supreme Court of Canada’s boozy federalism decision in R v Comeau is old news. And – no doubt – many important things can be said about Comeau and cooperative federalism, originalism, and precedent. My interest, however (and not surprisingly), is in the expert evidence issues it contains. Most notably, Comeau raises important issues about the factual determination of history in courtrooms and the roles of judges and expert witnesses in that task. In Comeau, I think these issues could have been handled a lot better.

Read More