The war of words over what counts as ‘reliable’ in forensic opinions keeps getting stretched or compressed depending on who is doing the commenting.
The fingerprint corps known as the IAI had its president put out a three-quarter-page blast about ‘standing strong’ on the IAI’s fingerprint methods. Here it is for those interested. The President’s Council of Advisors on Science and Technology put out a 179-page report; this IAI person got exercised enough about it to barely fill a single typed page, so it must have been written at night.
Here is a picture of him in case you run into him at some meeting. Meet Harold Ru[slander]. This is an example of ‘compressed,’ to say the least. No errors, period.
So, enough hilarity for now. The judicial machinations on forensic reliability are apparently being twisted into what this ex-lawyer considers ‘acceptable error.’ By a judge, remember. To be judged later by more judges.
Here is Grits for Breakfast presenting a question that judges are asking and answering all the time.
“What’s a ‘good error rate’ in non-science forensics testimony?” Harold thinks it’s A-OK. Of course, the IAI self-publishes all of its ‘scientific’ findings.
What error rate would justify excluding non-science-based forensics?
[excerpt]
A recent report from the President’s Council of Advisers on Science and Technology renewed concerns first raised by the National Academy of Sciences in 2009 about the lack of scientific foundation for many if not most commonly used forensics besides DNA and toxicology. Our friends at TDCAA shared on their user forum a link to the first federal District Court ruling citing the PCAST report, focused in this instance on ballistics matching.
The federal judge out of Illinois admitted ballistics evidence despite the PCAST report because he considered estimated false-positive rates relatively low. Here’s the critical passage on that score:
PCAST did find one scientific study that met its requirements (in addition to a number of other studies with less predictive power as a result of their designs). That study, the “Ames Laboratory study,” found that toolmark analysis has a false positive rate between 1 in 66 and 1 in 46. Id. at 110. The next most reliable study, the “Miami-Dade Study” found a false positive rate between 1 in 49 and 1 in 21. Thus, the defendants’ submission places the error rate at roughly 2%. The Court finds that this is a sufficiently low error rate to weigh in favor of allowing expert testimony. See Daubert v. Merrell Dow Pharms., 509 U.S. 579, 594 (1993) (“the court ordinarily should consider the known or potential rate of error”); United States v. Ashburn, 88 F. Supp. 3d 239, 246 (E.D.N.Y. 2015) (finding error rates between 0.9 and 1.5% to favor admission of expert testimony); United States v. Otero, 849 F. Supp. 2d 425, 434 (D.N.J. 2012) (error rate that “hovered around 1 to 2% ” was “low” and supported admitting expert testimony). The other factors remain unchanged from this Court’s earlier ruling on toolmark analysis.
Using a 2 percent error rate could understate things: the error rates from the studies he cited ranged from 1.5 to 4.8 percent, so the true figure could be more than twice that high (1 in 21). Still, I’m not surprised that some judges might consider an error rate of 1.5 to 4.8 percent acceptable. And the judge is surely right that the PCAST report provides a new basis for cross-examining experts and reduces the level of certainty experts can claim for their findings in front of juries, so that’s a plus.
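For anyone who wants to check the arithmetic, here is a minimal Python sketch converting the ‘1 in N’ false-positive figures quoted in the ruling into percentages. It is illustrative only; the figures themselves come straight from the passage above.

```python
# Quick sanity check of the percentages discussed above. The "1 in N"
# figures are the ones quoted in the ruling; converting to percent is
# just 100/N, rounded to one decimal place.
quoted_rates = {
    "Ames Laboratory study, low end": 66,
    "Ames Laboratory study, high end": 46,
    "Miami-Dade study, low end": 49,
    "Miami-Dade study, high end": 21,
}

for label, n in quoted_rates.items():
    print(f"{label}: 1 in {n} is about {100 / n:.1f}%")

# Prints roughly 1.5%, 2.2%, 2.0%, and 4.8% -- which is why calling it
# "a 2 percent error rate" can understate the worst quoted figure by
# more than a factor of two.
```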
Full article where Grits makes some more good points.