This is a repost of my June 5, 2013 blog on the important subject of bias in forensic science. Added comment: The expanding revelations of extraneous influences that can plague forensic experts in court have added to the existing literature on the subject. Crime lab educators use terms like bias control, double-blind precautions, multiple “independent” examiner precautions, and ethical concepts in their curricula. This preparation is admirable, but its application in practice is subject to significant variation depending on the employment relationships their students encounter throughout their careers. The journal article reviewed below covers the general issues very well. In addition, today’s Radley Balko post on The Agitator goes further, underscoring the cultural and financial climate in some police-managed crime labs that sways results in an interesting manner.
A REVIEW OF
The Forensic Confirmation Bias: Problems, Perspectives, and Proposed Solutions
Saul M. Kassin, Itiel E. Dror, and Jeff Kukucka published the above-titled article in the Journal of Applied Research in Memory and Cognition (Elsevier Ltd.).
The paper takes the FBI fingerprint fiasco, the misidentification of fingerprints in the notorious Brandon Mayfield case, as its linchpin to argue that previously staid forensic disciplines considered by most to be infallible are in reality a confabulation of unreliability. The hard-hitting FBI experts on the Mayfield case certainly cannot be dismissed as “bad apples” of forensic expertise. The factors the authors discuss in the realm of mistake-making extend from the FBI to the other forensic comparison disciplines that the National Academy of Sciences’ 2009 report described as fraught with examiner subjectivity and inconsistency. Reviewed under this aegis of professed infallibility were the “I think the evidence is similar to the evidence found at the crime scene (or on the victim)” stalwarts of “…toolmarks and firearms; hair and fiber analysis; impression evidence (i.e. bite marks); blood spatter; handwriting; and even fingerprints, until recently considered infallible.”
The work is a concise review of the relevant scientific literature and of the few cognition studies that have examined forensic examiners themselves. The authors add to the lexicon of interpretative influences by coining “forensic confirmation bias” for the class of influences examiners bring to the crime scene and their labs via previous experience and cognitive mindset.
The abstract states:
As illustrated by the mistaken, high-profile fingerprint identification of Brandon Mayfield in the Madrid Bomber case, and consistent with a recent critique by the National Academy of Sciences (2009), it is clear that the forensic sciences are subject to contextual bias and fraught with error. In this article, we describe classic psychological research on primacy, expectancy effects, and observer effects, all of which indicate that context can taint people’s perceptions, judgments, and behaviors. Then we describe recent studies indicating that confessions and other types of information can set into motion forensic confirmation biases that corrupt lay witness perceptions and memories as well as the judgments of experts in various domains of forensic science. Finally, we propose best practices that would reduce bias in the forensic laboratory as well as its influence in the courts.
“In many forensic disciplines, the human examiner is the main instrument of analysis. It is the forensic expert who compares visual patterns and determines if they are “sufficiently similar” to conclude that they originate from the same source (e.g., whether two fingerprints were made by the same finger, whether two bullets were fired from the same gun, or whether two signatures were made by the same person). However, determinations of “sufficiently similar” have no criteria and quantification instruments; these judgments are subjective. Indeed, a recent study has shown that when the same fingerprint evidence is given to the same examiners, they reach different conclusions approximately 10% of the time (Ulery, Hicklin, Buscaglia, & Roberts, 2012). Dror et al. (2011) have shown not only that the decisions are inconsistent but that even the initial perception of the stimulus, prior to comparison, lack inter- and intra-expert consistency.”