In a recent post, I promised to review the promised-to-be-published “paradigm proof” from a Marquette University news release, which revealed a “silver bullet” forensic cure for bite marks’ spotty history. The caption of the news release is eye-catching:
“Work will help establish protocol for using bite marks as forensic evidence”
Here are more declarations.
“will help establish protocol for using bite marks as forensic evidence” and that it would “determine whether bite-mark patterns have evidentiary value in criminal investigations.”
Neither of these claims, above or below, is addressed or established in the final report.
“The group’s findings have demonstrated that bite patterns can sometimes be recovered from skin and correlated with a high degree of probability to a member of the population – a significant step forward for forensic odontologists who have long been challenged on the scientific legitimacy of bite mark analysis.”
Furthermore, the press release states that the study
“will also provide a template that enables imaging specialists to evaluate and rank bite patterns against a benchmark to determine their evidentiary value.”
There is no mention of such a benchmark in the final report. The report also contains many oddly placed disclaimers:
“But Johnson cautions that such patterns cannot be used as an identification to the exclusion of all others.” (sic)
“It’s not identification where you’re individualizing,” Johnson explained. “It’s showing that certain characteristics of teeth are outliers, and when you see one of these, you can eliminate a great percentage of the population. And if you see a second one, or a third one, then it’s not likely that very many people in the world are going to have teeth like that.” (sic)
“The study is meant to augment the established guidelines of the American Board of Forensic Odontology. It should not be used in testimony or legal proceedings” (Page 18)
Here’s my preliminary comment: “Caveat emptor.” In the above quotations, the news release’s and the primary author’s jumbled terminology and logic set the stage for this review.
“Identification,” in forensic science, can only be done by methods that confirm an object of evidence can be sourced to its origin. Think of DNA profiling and fingerprints (with a few mistakes of their own).
“Seeing” a population cohort with “only one or two or three” dental “outliers” (meaning crooked teeth, I suppose) is no proof that human skin will reliably record this “rare” (their term) arrangement. It fails to establish a clear means of controlling mismatches (more on that below). The authors also want government-managed crime labs to take this pig skin study (“the platform”) and rerun their own experiments using their “template.”
So, here’s their evidence for those interested in reading 104 pages of hyperbole. The report also asks for additional funding for a “bite mark database,” which would be akin to the FBI’s successful FDDU (Federal DNA Database Unit). Please help us all. Highlighted_Johnson_Radmer_NIJ_2010-DN-BX-K176__10_13_13 (5)
A brief overview of forensic science research and academic standards
Progress in our popular concepts of scientific advancement is the meat of books, movies, school history lessons and exhibitions within major museums and universities. Forensic science has a significant footing in crime, murder and mayhem fiction from such notables as Edgar Allan Poe, Mary Shelley, Arthur Conan Doyle and, more recently, Patricia Cornwell and Kathy Reichs.
Has anybody written about botched forensic lab experimentation and misleading press releases?
Of course. The years-long flap over the FBI’s “hair matching” and “bullet lead matching” is still bouncing around in some of our brains. Maybe the newly formed National Commission on Forensic Science can deconstruct the bad management and bogus research that led to almost 20 years of junk expert testimony in thousands of criminal cases. Then maybe the lawyers in the NCFS can comb the judicial record and find out how trial judges and appellate panels accepted the hair and bullet experts over the objections of the defense bar and a lot of bad press. This might result in preventive measures recommended by the Commission and applied globally (via an enforceable mandate) across this nation’s law-enforcement-affiliated crime labs and their certifiers such as ASCLD (and a few others).
History, in looking backwards, tends to smooth over the bumps in the road of real-world science in such a way as to portray it ascending upward, or at least forward. But that’s not the entire reality. Science does develop from ideas, suppositions, personal observations and, in the academic environment, experimentation and publication. Publishing is the heartland where professors advance to glory or, as in this case, redemption. This last part about publication has a nasty side worth a short discussion. No one less than Randy Schekman, the 2013 Nobel Laureate in physiology or medicine, used the occasion of his award to share his take on the “behind the scenes” workings of journal publishing.
Leading academic journals are distorting the scientific process and represent a “tyranny” that must be broken, according to a Nobel prize winner who has declared a boycott on the publications.
A journal’s impact factor is a measure of how often its papers are cited, and is used as a proxy for quality. But Schekman said it was a “toxic influence” on science that “introduced a distortion”. He writes: “A paper can become highly cited because it is good science – or because it is eye-catching, provocative, or wrong.”
His statements are surely polarizing, and not on point with this blog’s issue, but they ring true. Remember “cold fusion”? As I said earlier, science isn’t inviolate: its progress can be choppy, and some peer reviewers may not have sufficient grounding in all the myriad specialties of today’s VERY specialized sciences.
Philip Campbell, the editor-in-chief of Nature, later countered:
“We select research for publication in Nature on the basis of scientific significance. That in turn may lead to citation impact and media coverage, but Nature editors aren’t driven by those considerations, and couldn’t predict them even if they wished to do so,” he said.
Cold fusion may have been ultimately ridiculed, but the process by which it was ruled out is a good illustration of the scientific method at work. The initial claim was written in such a way as to allow “replication” or “validation” by independent institutes, universities and legitimate researchers. This is the “communalism” of science. It’s a sharing process. A paper claiming first rights to something “good” presents enough about its testing design, instrumentation, data collection and analysis to represent the basis of the research claims. Then there is the questionable side of research.
Please put on your “critical thinking hat.” For a report’s data representation to constitute misconduct, it must be proven intentionally inaccurate and patently misreported. There is no evidence that this has ever been proven here, or ever led to a finding of misconduct, and in no way am I affirming anything in that regard. But, arguendo, academics do recognize data suppression: the failure to publish or discuss significant findings because the results are adverse to the interests of the researcher or his/her sponsor. Within this category are bare assertions: entirely unsubstantiated claims.
Review of “Replication of Known Dental Characteristics in Porcine Skin: Emerging Technologies for the Imaging Specialist”
In the press release for his $715,000 three-year study, Dr. L.T. Johnson states that this work will further the scientific basis for bite mark analysis. His actual final report, which describes the project and its results, paints a very different picture.
This study actually found that:
1.) It was difficult to create a bite mark suitable for analysis almost 50% of the time.
2.) There was an incredibly low accuracy rate in determining the actual biter.
3.) There was a large number of false positives.
Let’s look at the points in a little more detail:
1.) This study demonstrated that, even under ideal laboratory conditions, bite marks could not be reliably created with fidelity suitable for research analysis almost 50% of the time. Of the 200 bite marks created, only between 43.5% and 58% had sufficient detail to be usable. (Page 13)
Thus, with an anesthetized pig, which is not moving, and a mechanical jaw, which does not possess the complexity of a true human jaw, good-quality bitemarks could not be produced on many occasions. One must wonder what the quality of the evidence would be in the far less controlled circumstances of a violent altercation.
In other words, ‘real life’ bitemark analysis would have an even worse error rate.
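As a quick arithmetic check of the usable-bite figures (a minimal sketch; the 200-bite total and the 43.5%–58% range are the report’s own numbers from page 13):

```python
# Convert the report's usable-bite percentages (page 13) into counts.
total_bites = 200
usable_low = round(0.435 * total_bites)    # 87 bites had sufficient detail
usable_high = round(0.58 * total_bites)    # 116 bites had sufficient detail
unusable_low = total_bites - usable_high   # at least 84 bites were unusable
unusable_high = total_bites - usable_low   # up to 113 bites were unusable
print(f"usable: {usable_low}-{usable_high} of {total_bites}")
print(f"unusable: {unusable_low}-{unusable_high} of {total_bites}")
```

So even in the best case, roughly 84 of 200 laboratory-made bites could not be analyzed at all.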
2.) Fifty plastic models of dentitions were created. Each model was used to make four bites in different locations on twenty-five pigs, for a total of two hundred bites.
Note that the sample size is N=50 dentitions with 4 replications each; it is not N=200 independent bites. (Page 1)
It is not disclosed whether any of the correct bitemark/dentition assignments were replications (which would worsen the results). For example, if a particular dentition had a feature that made it easier to match, such as a missing tooth, up to four replicated bitemarks might have been reported as matches to that single dentition.
A software tool was used to measure the position of the front four teeth in the bitemarks. These measurements were then compared to 469 measurements of dental models, in an effort to match models to bitemarks.
Two examiners analyzed the bitemarks. Examiner 1 was able to rank the correct dentition as number 1 for only 5 out of 143 bites, and Examiner 2 for only 2 out of 156. That is an accuracy rate of roughly 3.5% and 1.3%, respectively. (Page 2) Again, it is not mentioned whether any of these matches were replicates from the same dentition.
The data indicate that it was next to impossible to correctly identify the biting source.
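A quick arithmetic sketch of those rank-1 counts, showing they amount to single-digit percentage accuracy (note that 5/143 is the fraction 0.035, which expressed as a percentage is 3.5%, not 0.035%):

```python
# Rank-1 accuracy: correct top-ranked matches divided by bites analyzed (page 2).
examiner1_accuracy = 5 / 143   # fraction ~0.035, i.e. about 3.5%
examiner2_accuracy = 2 / 156   # fraction ~0.013, i.e. about 1.3%
print(f"Examiner 1: {examiner1_accuracy:.1%}")
print(f"Examiner 2: {examiner2_accuracy:.1%}")
```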
3.) Images were then ranked within the top 1%, 5%, and 10%. The only relevant datum here, however, is selection of the correct specimen. If other specimens rank within the top percentiles, then this research shows significant chances of both false negatives and false positives, i.e., other models fit the bitemark better than the target. The false positive rate is not reported in this study, although false positives obviously occurred. (Page 15)
Here is the report’s summary of the rankings:
“in more than 20% of the Samples in this study, the Distance Metric Model finds the Target within the closest 5% of the Population. In more than 6% of the Samples, it finds the Target within the closest 1% of the Population”
Put another way, in nearly 80% of the samples in this study the model could not find the target within the closest 5% of the population, and in nearly 94% of the samples the target could not be found within the closest 1%.
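The complement arithmetic behind that restatement (a sketch using the report’s page-15 figures):

```python
# Fraction of samples where the target was NOT found within the closest
# 5% / 1% of the population (complements of the report's page-15 figures).
top5_hit_rate = 0.20   # "more than 20%" of samples: target in the closest 5%
top1_hit_rate = 0.06   # "more than 6%" of samples: target in the closest 1%
print(f"Target missed the closest 5% in ~{1 - top5_hit_rate:.0%} of samples")
print(f"Target missed the closest 1% in ~{1 - top1_hit_rate:.0%} of samples")
```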
This, the largest bitemark study yet performed on living organisms, confirms that bite mark analysis remains notoriously unreliable.
Their null hypothesis states: “It is not possible to replicate bite mark patterns in porcine skin, nor can these bite mark patterns be scientifically correlated to a known population data set with any degree of probability.”
The null hypothesis has been proven; the data support it. This methodology should not be accepted, nor should it be replicated by anyone. The most egregious oversight in this paper is its silence (i.e., omission) about false positives (a cause of wrongful convictions, and bad for the public’s trust in criminal justice) and false negatives (bad for public safety).