To the real story. It's about… crime labs and courts self-regulating on scientific issues, rather than the 2016 PrEsiDEnTiAl ELEctiOn.
All those “selfie” factors on the left are spot on. I wish for a figure 2 measuring the incidence of errors within a self-regulated industry, such as the criminal justice industry. Or try the one below, which reflects what PCAST, knowing the forensic industry's exemplary values, expected to facilitate. Apparently, we haven't reached consensus on step one (the far-left box) as yet. Seven years after all this started.
I’ve written a bit on the continued messaging from PCAST and the NAS (2009) to the forensic ‘matching’ police sciences, which live partly in the past (i.e., case-law precedent of admissibility) and cling to it strongly when outsiders tread in their direction. Interpretations of the Federal Rules of Evidence, however, are becoming more responsive to the “concept” of validation testing versus simply deferring to a group of people who agree with one another out of employment necessity. It is developing into a ‘trickle-down’ type of thing.
These ‘treaders’ have repeatedly explained that wrongful convictions have been aided by some of these “comparison-method” believers. Hair, bite-mark, various arson, blood and other pattern analyses, and bullet-lead comparisons rank heavily in the misuse, misapplication, and overbearing confidence levels that police experts developed for legal, not scientific, presentations. The only self-regulated exclusions of flawed comparison methods have been hair and bullet-lead composition, after the FBI eventually admitted its examiners oversold their wares.
It's ironic that all of them, at one time or another, were cloaked in the same “cutting-edge” assurances now being used against PCAST.
Speaking of the forensic science industry in its entirety, all the numerous forensic commissions may be having little systemic effect. Forensics is still unregulated by any umbrella entity immune to political influence from its many stakeholders. Just look at how the AAFS recently passed the buck by declining substantive input on PCAST questions and suggestions promoting better scientific veracity.
Please note that, in the attached ruling, the judge mentions and dismisses the PCAST opinion on ballistic ‘matching’ as being only a “forecast” of suggested forensic improvements, then grounds his denial of exclusion of the “toolmark” (read: ‘ballistics’) evidence with this reasoning:
“PCAST did find one scientific study that met its requirements (in addition to a number of other studies with less predictive power as a result of their designs). That study, the “Ames Laboratory study,” found that toolmark analysis has a false positive rate between 1 in 66 and 1 in 46. Id. at 110. The next most reliable study, the “Miami-Dade Study” found a false positive rate between 1 in 49 and 1 in 21. Thus, the defendants’ submission places the error rate at roughly 2%. The Court finds that this is a sufficiently low error rate to weigh in favor of allowing expert testimony.”
Two studies.
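The judge's arithmetic is easy to check. A minimal sketch (plain Python; the only inputs are the “1 in N” bounds quoted verbatim from the ruling above) converts each bound to a percentage:

```python
# False-positive bounds as quoted in the ruling; "1 in N" means 1/N.
studies = {
    "Ames Laboratory": (66, 46),  # between 1 in 66 and 1 in 46
    "Miami-Dade": (49, 21),       # between 1 in 49 and 1 in 21
}

for name, (low_n, high_n) in studies.items():
    low_pct = 100 / low_n    # favorable end of the range
    high_pct = 100 / high_n  # unfavorable end of the range
    print(f"{name}: {low_pct:.1f}% to {high_pct:.1f}%")
# Ames Laboratory: 1.5% to 2.2%
# Miami-Dade: 2.0% to 4.8%
```

Note that the ruling's “roughly 2%” sits at the favorable end: the Miami-Dade upper bound works out to nearly 5%.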
Only part of the forensic solutions offered by PCAST/NAS is the development of known ‘error rates.’ This is nothing new, as this rhetoric was born into the US legal system with the 1993 Daubert decision and the trilogy it began.
Read the three-page ruling and see what else the judge misses. Here’s a hint:
“Questions about the strength of the inferences to be drawn from the analysis of the examiners presented by the government may be addressed on cross-examination.”
I think the Babylonians and Greeks developed the principle of cross-examination, which: “………allude[s] to the almost supernatural power of the experienced trial lawyer — the power to confront and break the false witness.”