
Welcome to the Forensic Multimedia Analysis blog (formerly the Forensic Photoshop blog). With the latest developments in the analysis of m...

Saturday, February 29, 2020

A D.C. judge issues a much-needed opinion on ‘junk science'

Radley Balko is at it again. This time, the focus of his attention is a ruling on tool-mark analysis.

"This brings me to the September D.C. opinion of United States v. Marquette Tibbs, written by Associate Judge Todd E. Edelman. In this case, the prosecution wanted to put on a witness who would testify that the markings on a shell casing matched those of a gun discarded by a man who had been charged with murder. The witness planned to testify that after examining the marks on a casing under a microscope and comparing it with marks on casings fired by the gun in a lab, the shell casing was a match to the gun.

This sort of testimony has been allowed in thousands of cases in courtrooms all over the country. But this type of analysis is not science. It’s highly subjective. There is no way to calculate a margin for error. It involves little more than looking at the markings on one casing, comparing them with the markings on another and determining whether they’re a “match.” Like other fields of “pattern matching” analysis, such as bite-mark, tire-tread or carpet-fiber analysis, there are no statistics that analysts can produce to back up their testimony. We simply don’t know how many other guns could have created similar markings. Instead, the jury is simply asked to rely on the witness’s expertise about a match."

As noted in the previous post, the "pattern matching" comparisons are prone to error when an appropriate sample size is not used as a control.

The issue, as far as statistics are concerned, is not necessarily the observations of the analyst but the conclusions. Without an appropriate sample, how does one know where the observed results would fall within a normal distribution? Are the results one is observing "typical" or "unique?" How would you know? You would need to construct a valid test.
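To make that concrete, here's a minimal sketch in Python of what such a test could look like. The similarity scores below are invented for illustration; they stand in for a control sample of comparisons between casings known to come from different guns.

```python
import statistics

# Hypothetical similarity scores from a control sample of 30 known
# non-matching comparisons (casings fired by different guns).
# These numbers are invented for illustration only.
control_scores = [0.42, 0.51, 0.38, 0.47, 0.55, 0.49, 0.44, 0.53,
                  0.40, 0.46, 0.52, 0.45, 0.48, 0.50, 0.43, 0.54,
                  0.41, 0.47, 0.49, 0.46, 0.52, 0.44, 0.50, 0.48,
                  0.45, 0.51, 0.43, 0.47, 0.53, 0.46]

observed = 0.81  # similarity score for the questioned comparison

mean = statistics.mean(control_scores)
stdev = statistics.stdev(control_scores)
z = (observed - mean) / stdev  # how many standard deviations out?

print(f"control mean = {mean:.3f}, stdev = {stdev:.3f}, z = {z:.2f}")
```

A large z-score says the observed similarity is atypical of known non-matching pairs; without the control sample, no such statement is possible. That is the whole point: the control distribution, not the analyst's impression, is what lets you call a result "typical" or "unusual."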

Balko's point? No one seems to be doing this - conducting valid tests. Well, almost no one. I certainly do - conduct valid tests, that is.

If you're interested in what I'm talking about and want to learn more about calculating sample sizes and comparing observed results, sign up today for Statistics for Forensic Analysts (link).

Have a great weekend, my friends.

Tuesday, February 25, 2020

Sample Size? Who needs an appropriate sample?

Last year, I spent a lot of time talking about statistics and the need for analysts to understand this important science. I've written a lot about the need for appropriate samples, especially around the idea of supporting a determination of "match" or "identification."

Many in the discipline have responded by essentially saying, it is what it is - we don't really need to know about these topics or incorporate these concepts into our practice.

Now comes a new study from Sophie J. Nightingale and Hany Farid, Assessing the reliability of a clothing-based forensic identification. If you've been to one of my Content Analysis classes, or one of my Advanced Processing Techniques sessions, reading the new study won't yield much new information from a conceptual standpoint. It will, however, add a wealth of new data affirming the need for appropriate samples and methods when conducting work in the forensic sciences.

From the new study: "Our justice system relies critically on the use of forensic science. More than a decade ago, a highly critical report raised significant concerns as to the reliability of many forensic techniques. These concerns persist today. Of particular concern to us is the use of photographic pattern analysis that attempts to identify an individual from purportedly distinct features. Such techniques have been used extensively in the courts over the past half century without, in our opinion, proper validation. We propose, therefore, that a large class of these forensic techniques should be subjected to rigorous analysis to determine their efficacy and appropriateness in the identification of individuals."

The important thing about the study is that the authors collected an appropriate set of samples to conduct their analysis.

Check it out and see what I mean. Notice how the results develop from the samples collected. See how they differ from an examination of a single image. Thus, as I always say: below a certain sample size, you're better off flipping a coin.
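For a sense of the numbers involved, here is the standard textbook formula for sizing a sample to estimate a proportion (a general statistical sketch, not anything specific to the Nightingale and Farid study):

```python
import math

def required_sample_size(margin_of_error, confidence_z=1.96, p=0.5):
    """Minimum sample size to estimate a proportion to within
    +/- margin_of_error at the given confidence level.
    confidence_z = 1.96 corresponds to 95% confidence;
    p = 0.5 is the most conservative (worst-case) assumption."""
    return math.ceil(confidence_z ** 2 * p * (1 - p) / margin_of_error ** 2)

# To pin down an error rate to within +/- 5 points at 95% confidence:
print(required_sample_size(0.05))  # 385
```

Hundreds of comparisons, not one or two. An examination of a single image simply cannot support a statistical claim of that kind.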

If, after reading the paper, you're interested in increasing your knowledge of statistics and experimental science, feel free to sign up for Statistics for Forensic Analysts.

Have a great day, my friends.