
Saturday, February 29, 2020

A D.C. judge issues a much-needed opinion on ‘junk science’

Radley Balko is at it again. This time, the focus of his attention is a ruling on tool-mark analysis.

"This brings me to the September D.C. opinion of United States v. Marquette Tibbs, written by Associate Judge Todd E. Edelman. In this case, the prosecution wanted to put on a witness who would testify that the markings on a shell casing matched those of a gun discarded by a man who had been charged with murder. The witness planned to testify that after examining the marks on a casing under a microscope and comparing it with marks on casings fired by the gun in a lab, the shell casing was a match to the gun.

This sort of testimony has been allowed in thousands of cases in courtrooms all over the country. But this type of analysis is not science. It’s highly subjective. There is no way to calculate a margin for error. It involves little more than looking at the markings on one casing, comparing them with the markings on another and determining whether they’re a “match.” Like other fields of “pattern matching” analysis, such as bite-mark, tire-tread or carpet-fiber analysis, there are no statistics that analysts can produce to back up their testimony. We simply don’t know how many other guns could have created similar markings. Instead, the jury is simply asked to rely on the witness’s expertise about a match."

As noted in the previous post, these "pattern matching" comparisons are prone to error when an appropriate sample size is not used as a control.

The issue, as far as statistics are concerned, is not necessarily the observations of the analyst but the conclusions. Without an appropriate sample, how does one know where the observed results would fall within a normal distribution? Are the results one is observing "typical" or "unique"? How would you know? You would construct a valid test.
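As a hypothetical illustration of that idea (not a procedure from the Tibbs case or from any particular lab), here is a minimal Python sketch of checking whether an observed similarity score is "typical" against a control sample of known non-matching comparisons. The scores and sample values are invented for the example.

```python
# Hypothetical example: is an observed similarity score "typical" of
# known non-matches, or does it stand out? All values are invented.
from statistics import mean, stdev, NormalDist

# Similarity scores from comparisons against a control sample of
# casings fired by OTHER guns (known non-matches).
non_match_scores = [0.42, 0.38, 0.51, 0.45, 0.40, 0.47, 0.36, 0.49,
                    0.44, 0.41, 0.39, 0.48, 0.43, 0.46, 0.37, 0.50]

observed = 0.83  # score from the questioned casing vs. the suspect gun

mu = mean(non_match_scores)
sigma = stdev(non_match_scores)

# Where does the observed score fall within the non-match distribution?
z = (observed - mu) / sigma
p = 1 - NormalDist(mu, sigma).cdf(observed)  # upper-tail probability

print(f"z-score: {z:.2f}")
print(f"P(non-match score >= observed): {p:.4f}")
```

The point of the sketch is simply that "match" is only meaningful relative to a distribution built from an adequate control sample; without one, there is no way to say how unusual the observed result actually is.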

Balko's point? No one seems to be doing this - conducting valid tests. Well, almost no one. I certainly do - conduct valid tests, that is.

If you're interested in what I'm talking about and want to learn more about calculating sample sizes and comparing observed results, sign up today for Statistics for Forensic Analysts (link).
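For a taste of what a sample-size calculation looks like, here is a generic sketch using the standard formula for estimating a proportion at a chosen confidence level and margin of error; it is offered only as an illustration, not as the course material itself.

```python
# Generic sample-size estimate for a proportion:
# n = z^2 * p * (1 - p) / e^2, rounded up.
from math import ceil
from statistics import NormalDist

def sample_size(confidence: float = 0.95, margin_of_error: float = 0.05,
                expected_proportion: float = 0.5) -> int:
    """Minimum n to estimate a proportion within +/- margin_of_error."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided z value
    p = expected_proportion
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    return ceil(n)

# 95% confidence, 5% margin of error, worst-case p = 0.5 -> 385 samples
print(sample_size())
```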

Have a great weekend, my friends.
