
Tuesday, December 10, 2019

Alternative Explanations?

I've sat through a few sessions on the Federal Rules of Evidence. Rarely does the presenter dive deep into the Rules; most offer only an overview of the rules relevant to digital / multimedia forensic analysis.

A typical slide will look like the one below:


Those who have been admitted as an Expert Witness at trial should be familiar with (a) - (d). But what about the Advisory Committee's Notes?

Deep in the notes section, you'll find this preamble:

"Daubert set forth a non-exclusive checklist for trial courts to use in assessing the reliability of scientific expert testimony. The specific factors explicated by the Daubert Court are (1) whether the expert's technique or theory can be or has been tested—that is, whether the expert's theory can be challenged in some objective sense, or whether it is instead simply a subjective, conclusory approach that cannot reasonably be assessed for reliability; (2) whether the technique or theory has been subject to peer review and publication; (3) the known or potential rate of error of the technique or theory when applied; (4) the existence and maintenance of standards and controls; and (5) whether the technique or theory has been generally accepted in the scientific community. The Court in Kumho held that these factors might also be applicable in assessing the reliability of nonscientific expert testimony, depending upon “the particular circumstances of the particular case at issue.” 119 S.Ct. at 1175.

No attempt has been made to “codify” these specific factors. Daubert itself emphasized that the factors were neither exclusive nor dispositive. Other cases have recognized that not all of the specific Daubert factors can apply to every type of expert testimony. In addition to Kumho, 119 S.Ct. at 1175, see Tyus v. Urban Search Management, 102 F.3d 256 (7th Cir. 1996) (noting that the factors mentioned by the Court in Daubert do not neatly apply to expert testimony from a sociologist). See also Kannankeril v. Terminix Int'l, Inc., 128 F.3d 802, 809 (3d Cir. 1997) (holding that lack of peer review or publication was not dispositive where the expert's opinion was supported by “widely accepted scientific knowledge”). The standards set forth in the amendment are broad enough to require consideration of any or all of the specific Daubert factors where appropriate."

Important takeaways from this section:
  1. Whether the expert's technique or theory can be or has been tested—that is, whether the expert's theory can be challenged in some objective sense, or whether it is instead simply a subjective, conclusory approach that cannot reasonably be assessed for reliability;
  2. Whether the technique or theory has been subject to peer review and publication;
  3. The known or potential rate of error of the technique or theory when applied;
  4. The existence and maintenance of standards and controls; and
  5. Whether the technique or theory has been generally accepted in the scientific community.
As to point 1, the key words are "objective" and "reliability." Part of this relates to a point further on in the notes - did you adequately account for alternative theories or explanations? More about this later in the article.

To point 2, many analysts confuse "peer review" and "technical review." When you share your report and results in-house and a coworker checks your work, that is not "peer review"; it is a "technical review." A "peer review" happens when you or your agency sends the work out to a neutral third party for review. See ASTM E2196 for the specific definitions of these terms. If your case requires "peer review," we're here to help. We can either review your case and work directly, or facilitate a blind review akin to the publishing review model. Let me know how we can help.

To point 3, there are known and potential error rates for disciplines like photographic comparison and photogrammetry. Do you know what they are and how to find them? We do. Let me know how we can help.
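
To make the idea of a "known or potential rate of error" concrete, here is a minimal sketch in Python. The validation-study numbers and the function name are hypothetical, my own assumptions for illustration only; the point is simply that an error rate is an observed proportion with an uncertainty around it, not a bare assertion.

```python
import math

def wilson_interval(errors: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (95% by default)."""
    if trials <= 0:
        raise ValueError("trials must be positive")
    p = errors / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return max(0.0, centre - half), min(1.0, centre + half)

# Hypothetical validation study: 3 erroneous conclusions in 120 trials.
errors, trials = 3, 120
low, high = wilson_interval(errors, trials)
print(f"Observed error rate: {errors / trials:.3f}")
print(f"95% interval: {low:.3f} to {high:.3f}")
```

A technique with a documented study behind numbers like these can be challenged "in some objective sense"; one without them cannot.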

To point 4, either an agency has its own SOPs or it follows the consensus standards found in places like ASTM. If your work does not follow your own agency's SOPs, or the relevant standards, that's a problem.

To point 5, many consider "the scientific community" to be limited to certain trade associations. Take LEVA, for example. It's a trade group of mostly government-service employees at the US/Canada state and local level. Is "the scientific community" those 300 or so LEVA members, mostly in North American law enforcement? Of course not. Is "the scientific community" inclusive of LEVA members and those who have attended a LEVA training session (but are not LEVA members)? Of course not. The best definition of "the scientific community" that I've found comes from Scientonomy. Beyond the obvious disagreement between how "the scientific community" is portrayed at LEVA conferences and the actual study of the scientific mosaic and epistemic agents, a "scientific community" should at least act in the world of science and not rhetoric - proving something as opposed to declaring something.

Further down the page, we find this:

"Courts both before and after Daubert have found other factors relevant in determining whether expert testimony is sufficiently reliable to be considered by the trier of fact. These factors include:

  • (3) Whether the expert has adequately accounted for obvious alternative explanations. See Claar v. Burlington N.R.R., 29 F.3d 499 (9th Cir. 1994) (testimony excluded where the expert failed to consider other obvious causes for the plaintiff's condition). Compare Ambrosini v. Labarraque, 101 F.3d 129 (D.C.Cir. 1996) (the possibility of some uneliminated causes presents a question of weight, so long as the most obvious causes have been considered and reasonably ruled out by the expert)."

This statement speaks to the foundations of one's conclusions. In performing a photographic comparison, how many common points between the "known" and the "unknown" equal a match? On what do you base your opinion? SWGIT Section 16 has a nice chart to guide the analyst in explaining the strength of one's conclusion. Those who used this approach often had a checklist combined with threshold values: so many common points equals a match.
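
To illustrate the checklist-plus-threshold idea, here is a minimal sketch in Python. The comparison points, threshold values, and conclusion labels are hypothetical and illustrative only; they are not the SWGIT Section 16 scale or any agency's actual criteria. The sketch simply shows the workflow: tally corroborated points, stop and account for any unexplained difference (the "obvious alternative explanation"), then map the tally to a strength-of-conclusion statement.

```python
# Hypothetical checklist: each point of comparison between the "known"
# and the "unknown" is recorded as corroborating, differing, or inconclusive.
comparison_points = {
    "bumper sticker position": "corroborates",
    "wheel style": "corroborates",
    "body damage, right rear quarter": "differs",
    "roof rack": "inconclusive",
}

# Illustrative thresholds only -- not the SWGIT Section 16 scale.
THRESHOLDS = [
    (5, "identification supported"),
    (3, "strong support for common source"),
    (1, "limited support"),
    (0, "inconclusive"),
]

def strength_of_conclusion(points: dict[str, str]) -> str:
    """Map tallied comparison points to a hypothetical conclusion label."""
    if any(v == "differs" for v in points.values()):
        # An unexplained difference is an alternative explanation that must be
        # accounted for before any common-source conclusion is offered.
        return "unexplained difference -- resolve or exclude before concluding"
    corroborating = sum(1 for v in points.values() if v == "corroborates")
    for minimum, label in THRESHOLDS:
        if corroborating >= minimum:
            return label
    return "inconclusive"

print(strength_of_conclusion(comparison_points))
```

Whatever the actual scale used, the analyst should be able to state the thresholds and show how the tally was made; that is what makes the conclusion testable rather than declarative.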

But can one account for obvious alternative explanations if the quality / quantity of data is minimal? Of course not. This goes to the comedic statement often made, "these five white pixels do not make a car." The obvious alternative explanations are every other car on the planet, or in the greater region if you will. 

When I perform a "peer review," the work encompasses not only the technical / scientific aspects of the package of data but also the rules of evidence. If you failed to account for alternative explanations, I'll let you know. If your report is long on rhetoric and short on science, I'll let you know. If you're an attorney wondering about the other side's work, I'll let you know. This service is based in science, standards, and the law, not in rhetoric or my personal standing in the scientific community.

If you need help with a case, let me know how we can help. We're open for business.
