Wednesday, September 18, 2019

Ipse dixit

Today's phrase is "ipse dixit."

Ipse dixit (Latin for "he said it himself") is an assertion without proof; or a dogmatic expression of opinion. It's a fallacy of defending a proposition by baldly asserting that it is "just how it is," thus distorting the argument by opting out of it entirely: the claimant declares an issue to be intrinsic, and not changeable.

Today's phrase is relevant as I received a reply to my inquiries about LEVA's certification programs and Grant Fredericks' testimony (link).

I sent the inquiry to LEVA's certification board seeking to clarify in my mind what was said and to reconcile this new information with existing information across several professional domains. The accreditation of certification bodies is something that the National Commission on Forensic Science examined and supported (link). I had two reasons for asking:
  1. The OSAC has compiled a list of certifications for each discipline. One of the categories on the list deals with a certification program's accreditation.
  2. Professionally, I and my firm perform third-party / independent verification services for digital / multimedia forensic science practitioners. I/we are the "V" in ACE-V. Thus, in evaluating customer reports, processes, CVs, etc., it's important to have the facts of the relevant data points.
The reply to my request should have been rather straightforward. Either LEVA's program is accredited or it's not. If it is, there should be some form of proof - certificate / number / etc. - from the accrediting body. As an example, the AVFA certification from ETA-I is accredited by the International Certification Accreditation Council (link). Before sitting for the AVFA certification, I called the ETA-I and asked about their accreditation scheme. Then, I called the ICAC to verify what I was told. Then I checked a bit more into the ICAC and their accreditation regime. I found all involved in this process to be refreshingly transparent.

In terms of transparency, here's the reply that I received from LEVA's certification committee:

"Thanks for taking the time to ask for clarification regarding Grant's testimony about LEVA certification and training accreditation and UIndy.  We're glad to provide the following.

After several others and I reviewed the video, it seemed to us Grant was not referencing LEVA's certification program being accredited by UIndy. He went on to say seconds later that the LEVA training was accredited and that is true.  At time 20:35. He says it quite clearly.  

LEVA's certification program was unveiled in January 2004.  The University of Indianapolis started accrediting the three LEVA core classes in 2006. 

In an article written by Thomas C. Christenberry, then Director of Public Safety Education, School for Adult Learning, University of Indianapolis, he stated: "A significant benefit for LEVA is the academic association with the University of Indianapolis. LEVA developed three core courses: Level 1 – Forensic Video Analysis and the Law, Level 2 – Digital Multimedia Evidence Processing, and Level 3 – Advanced Forensic Video Analysis and the Law. Each of the core courses has been reviewed by the University of Indianapolis’ School for Adult Learning, which approved the awarding of Continuing Education Units (CEUs) for successful completion of the courses. Some students request university credit for the courses completed, and the University of Indianapolis has approved these core courses for credit."[2]"

Having not exactly received an answer, I attempted to clarify and sent the following in reply:

"Thanks for the response. 

If I’m understanding your reply, including the quote from Christenberry, the LEVA 1-3 courses were (or are?) approved by U Indy for CEUs. Approving the courses for the issuance of CEUs is analogous to LEVA approving my courses for credit towards the initial / continuing educational requirements for LEVA certification. With the end of the relationship, and the end of the program for which Cristenberry was responsible, the assumption would be that U Indy is no longer issuing CEUs for LEVA’s courses. If LEVA isn’t receiving this service from U Indy, I would imagine that the ASTM’s service would be a logical replacement (https://www.iacet.org/ap/104421/)? I’m well familiar with the model as the LAPD worked with Los Angeles City College to issue CEUs for the LAPD’s internal training courses. My POST Photoshop course was part of that regime at the time. By the time one finished all the mandatory LE training, you could almost receive an AS in Criminal Justice.

Perhaps, a way of looking at the statements made by Grant is either he mis-spoke, or misunderstood the relationship and services offered by U Indy as clearly the issuance of CEUs is not an accreditation of a training program. The training landing page on the LEVA web site lists neither a current accreditation nor how to apply for CEUs for the various training events (https://leva.org/index.php/training). If you have a certificate or other proof of current accreditation of LEVA’s program, please do share. I would like to update the OSAC with this information."

To which I received the following:

"Thanks, we appreciate your input and information."

Yes. That's it.

Thus, if one believes the LEVA certification program to be "accredited," it must only be by ipse dixit, as there is nothing on the LEVA web site, and nothing in their response to me, that would indicate otherwise.

Thus, I will continue to advise clients whose staff are LEVA certified to refrain from stating that the certification program is accredited in either their testimony or their reports. Also, absent anything to the contrary, the OSAC's list will remain unchanged.

Have a good day my friends.

Tuesday, September 17, 2019

Generic Conditionals

In my retirement from police service, I'm busier than ever. One of the projects that I'm involved with is the creation of an instructional program in report writing for a national police service. In defining the instructional problem, I've found that the learner population has a problem with "factual conditionals." I've also noted this problem in the report writing samples of their forensic science practitioners.
Because people have problems with the relationship between the dependent and independent clauses, their reports are hard to read and interpret. What should be a clear statement - (independent variable / action) resulted in (dependent variable / result) - is often a confusion of meanings.

Often, what should be written as a conditional is written as a declarative statement. This problem hides potential meanings, and obscures avenues for inquiry.

For example, one of the sample videos that I use in my Content Analysis class (link) examines a traffic collision scene. A collision occurs as V1 attempts to turn left whilst exiting a parking garage. In the declarative statement, fault is obvious - turning left alludes to issues of right of way. What is missing is the conditional. If V1's progress is purposefully impeded, then the inquiry turns from a simple traffic collision to a "staged collision" - an entirely different line of inquiry.

When the responding officer records the statements of those involved, as well as witnesses, it becomes important to consider the statements in a "conditional sense": "If Person 1's statement is true, then the scene would be arrayed thus," or "If Person 2's statement is true, then Person 1's statement is untrue." The conditional statements help frame the analysis of the statements and the evidence.

Using an example from the weekend's posts, "If headlight spread pattern analysis is a subset of digital / multimedia forensic science (comparisons), then the analysis must examine the recording of the pattern and not the pattern itself."

Just something to ponder on this beautiful Tuesday morning. Have a great day, my friends.

Monday, September 16, 2019

Fusion-based forensic science

When I first started out in multimedia forensics, there were just a few vendors offering tools to practitioners. Ten years later, there were a whole bunch. A further ten years later, and there are a relatively small group of vendors again.

When I sat down with Dr. Lenny Rudin of Cognitech a few weeks ago, to catch up and see what's new with him, we took that walk down memory lane that is customary for the "old guys." We talked all things Cognitech, and the state of the market.

I asked him about "codec / format support" in his tools and it seems that he favors the approach taken by the folks at Input-Ace - or the other way around, I guess, since Cognitech has been around much, much longer than Input-Ace. The approach - seize the device and process the hard drive directly - is where Input-Ace is going in its partnership with Cellebrite and DME Forensics. Cognitech is there as well, seemingly with SalvationData.

This fusion-based approach makes sense. Do what you do, well, and let others do their thing well - then partner with them. It makes sense.

On the other end of the spectrum is Amped SRL. I remember standing in the exhibit space at LEVA Asheville when a certain developer was trying to offer his services (even his IP) to Amped. The response he received was that everything that is necessary to the task should be in FIVE. No plug-ins. No extra programs. That, and that Amped was happy with their current staff and development pace. In that conversation, Amped noted that FIVE should be the "go-to" for analysts. For a few years, this was the case. Now, increasingly, it's not.

Is everything that is necessary for an analyst's work in FIVE? Currently, there are features available in DVRConv that aren't available in FIVE. The main one that most customers wanted (when I was fielding calls in the US for Amped Software, Inc.) is the ability to "convert" everything. You see, in DVRConv, you just drag/drop a bunch of files and folders into the interface and it "converts" everything that's convertible - and maintains folder structure. FIVE, on the other hand, only "converts" one file type at a time, one folder at a time. The new "change frame rate on conversion" functionality is another example of something that is done in DVRConv that isn't done in FIVE.

Then, there's Replay. It seems that rather than continue to develop FIVE, Amped is cannibalizing it. Is Redaction moving over to Replay? It seems so. Yes, Replay now has improved tracking functions. But, what about all of those customers who bought FIVE for redaction? How are they feeling right now, knowing that promises made about future functionality weren't promises kept? Promises made that weren't kept, you ask? Yes. Amped has yet to make good on their promise, made at LEVA Scottsdale, of adding audio redaction functionality to FIVE. Perhaps audio redaction will make it into Replay. Perhaps not.

Amped was once known as the provider of "specialists'" tools. Now, it's "generalizing" them as it migrates FIVE's functionality to Replay. A lot of US-based agencies bought five years' worth of support and upgrades when they purchased their original licenses. I'm wondering how that ROI looks now, given Amped's latest moves.

Meanwhile, Amped's competition has closed the gap and begun to overtake them in the marketplace. Dr. Rudin, sensing this, is about to make a big push with some new tools (hint: video authentication).

But, with all of this in mind, probably the oddest development is Foclar's entry into the US market. Foclar's Impress isn't a new product. It's been available in Europe for years. But, in terms of development and functionality, it's about where Amped was at FIVE's entry to the US market ten years ago. Let's see if Foclar has the wherewithal to play catch-up. As Amped raises its prices for its products and services beyond what the market will bear, it'll leave the door open for Foclar to make some headway - should that European company choose to compete on price.

Gone - for the most part - are Signalscape, Salient Stills, and Ocean Systems. These companies still exist as fractions of their former selves. Signalscape's StarWitness is gone - replaced by interview room recorders and evidence processing machines. Salient Stills never really owned the IP in VideoFocus - which is now a part of DAC / Salient Sciences. As for Ocean Systems, it seems as though it exists at the will and pleasure of Larry Compton (who might be its only employee). They're still offering the Omnivore, the Field Kit (Omnivore + a laptop), and Larry's training courses. Legacy products are on autopilot. Has ClearID been substantially improved / updated since Chris Russ' sacking years ago? I don't think so. I never understood the value proposition of taking 8 of the algorithms from OptiPix / FoveaPro (which were collectively priced at about $500 for about 70 algorithms) and charging much, much more. Sure, you got a new UI. But was that worth the price? I still have an old MacBook running PS CS2 just to run Chris' old stuff.

Have a great day my friends.

Sunday, September 15, 2019

Does the FBI offer classes to the general public?

In the final installment of the analysis of the testimony in the Sidney Moorer retrial, I want to focus on one element of the testimony that happened around the 8 minute mark of the video (link). The interesting statement comes immediately after the defense attorney asks Grant to find FAVIAU in the FBI Handbook (the FBI's lab directory?). Grant correctly notes that FAVIAU is not a part of the FBI's laboratory. If you don't already know, FAVIAU is a part of the Operational Technology Division.

In finishing his statement about the correct name of FAVIAU - the Forensic Audio, Video, and Image Analysis Unit - Grant notes that he's taken classes from them.
I touched on this in a previous post, but wanted to dive a bit deeper into his comment. Can one "take classes from the FBI?" Has the FBI ever offered training to competency in the topics around "video forensics" that was open to non-DOJ personnel?

I'm asking this because I have, on several occasions, been a training provider to the DOJ's various agencies. Thinking back to the number of hoops necessary to get "on campus," it's hard to imagine that the FBI would permit folks to come to them to receive training.

I am aware, however, that DOJ facilities were used by LEVA in the olden days. To the best of my knowledge, this use was not "official," and the way in which LEVA used these facilities was the subject of some controversy - so much so that the relationship ended long ago. In these instances, individuals were "trained by LEVA," at a DOJ facility.

Thus, could Grant mean that he attended a training session facilitated by LEVA, but held at the FBI's facilities in Northern Virginia (on/at FAVIAU)? I don't know what he meant. I can only assess what he said - "... and I've taken classes from them."

The comment certainly has nothing to do with his contract work at the FBI National Academy. Grant presents information sessions to students (usually law enforcement middle-management types) on the ways that video may assist their investigations. His presentations are not a mandatory part of the program, rather serving as an "elective" or "enrichment" session. Nevertheless, he's been doing that for quite a while now.

Nevertheless, if you're reading this and you are not / were not a DOJ employee, and have a training certificate indicating that you were trained in some element of video analysis by the FBI's FAVIAU - please take a picture of it and send it to me. That certificate, if it exists, may be this discipline's "black swan."

Have a good weekend my friends.

What is Headlight Spread Pattern Analysis?

Continuing in the review of the Sidney Moorer retrial, there's an exchange between Grant Fredericks and the defense attorney that deals with the foundations of "Headlight Spread Pattern Analysis." It starts almost immediately into the cross examination.

Around 3:13 of the video (link), the attorney asks Grant if he considers himself to be a scientist. I think that the attorney was trying to establish that if Grant were to consider himself a "scientist," then surely he should adhere to the standards and practices of science. Grant doesn't give a yes/no answer, steering his answer more to practice.

Around 3:42 of the video, the attorney tries to steer the questions towards "Headlight Spread Pattern Analysis" and whether it is a "new / novel" practice. I believe this line of questioning was an attempt to convince the court that if it is new / novel, then it hasn't been generally accepted and thus must not be given the same weight as "regular science." That line of questioning didn't seem to stick.

Around 5:39 of the video, the attorney asks Grant if he engages in new / novel approaches. Grant answers that he doesn't. The follow-up question deals with any peer-reviewed publications that Grant has authored (I think - the questions were a bit hard to follow - the judge seems to admonish the attorney to slow down and ask one question at a time). I've searched for any publications that Grant has authored either demonstrating or validating the comparison of recordings of the spread of light from the head lamps of vehicles and have found no publications at all - none with his name, none with anyone's name.
I think the attorney then goes on to ask about this analysis type and its "general acceptance" in the field. Grant indicates (6:04) that it is generally accepted. But, in answering the question, he uses the phrase "to the exclusion of all others." I can't find a record of any peer-reviewed / published study that validates the techniques he's described. This would lead us to a discussion of "I've testified about it" vs. "I've published a validation study in a peer-reviewed journal." If, by general acceptance, Grant means the former, he's correct. If, by general acceptance, he means the latter, then he's incorrect. Clearly, Grant seems to mean the former whilst the attorney means the latter.
The attorney then appears to want to clarify this point and asks if "Headlight Spread Pattern Analysis" for the purposes of identification is generally accepted. Grant answers "yes." Grant is then asked if he's the only one that has testified as to such an analysis. Grant answers "no."

Then, the period between 6:20 and 8:02 takes an interesting turn.
At 6:20 of the video, Grant states that he received his training "from the FBI on reverse projection for this purpose." Again, we have the problem with pronouns. But, considering that the discussion is "Headlight Spread Pattern Analysis," the implication here is that the FBI has trained Grant Fredericks to competency in the analysis of recordings of light emitted from the headlamps of vehicles. Can this be a true statement?

Consider what was said. Review his answer in the video.

When I attended a conference lecture by Walter Bruehs, of the FBI's FAVIAU, was I trained to competency in a particular performance task by the FBI? Of course not. I listened to Walt. Maybe I took some notes. I was happy to listen to what Walt had to say and learn his perspective on the topic. But, by no sense of the term was I trained to competency by the FBI. I use the term, "training to competency," specifically as such a training is the only type of training that should count towards subject matter expertise.

To attempt to resolve this issue, let's turn to the ASTM and its standards around training.

Our first stop will be ASTM E2917-19a - Standard Practice for Forensic Science Practitioner Training, Continuing Education, and Professional Development Programs. In this document, the following definitions are helpful.

  • Training, n—the formal, structured process through which a forensic science practitioner reaches a level of scientific competency after acquiring the knowledge, skills, and abilities (KSAs) required to conduct specific forensic analyses.
  • Knowledge, skills, and abilities (KSAs), n—the level of information, qualifications, and experience needed to perform assigned tasks.
  • Competency, n—demonstration that a forensic science practitioner has acquired and demonstrated specialized knowledge, skills, and abilities (KSAs) in the standard practices necessary to conduct examinations in a discipline or category of testing prior to performing independent casework.

Then, let's examine ASTM E2659-09 - Standard Practice for Certificate Programs

  • Learning event, n—combination of learning experiences designed to assess a learner’s understanding of content or his/her ability to perform a skill or set of skills that satisfies a set of learning objectives/outcomes. This event can be accomplished by any media sufficient to achieve the learning outcomes, including but not limited to, classroom instruction, distance-learning course, blended-learning activities, conferences, and satellite transmissions.
  • Criterion-referenced method, n—approach to determining a passing standard for a learner assessment based on subject matter expert-identified performance standards and not based on the performance of other students. 

With these definitions, we can see that "training to competency" involves a formal learning event (class or series of classes), with a defined curriculum, the goal of which is the competency of the learner, and is completed by some objective summative assessment.

Does sitting at a LEVA / IAI / NaTIA conference lecture fit this definition? No. When I presented at LEVA or NaTIA, it was an "informational session," not training to competency. I certainly hoped that you learned something, but there was no mechanism in place to objectively assess that learning.

I do, in fairness, remember a time when employees of the FBI attended LEVA conferences to demonstrate how they performed reverse projection exercises in the field. I remember that these were presented as "information sessions," with the admonishment that attendees should not consider themselves "trained" or "trained by the FBI." Further to the point, as part of those demonstrations, the "thing" being measured was usually a person - not the pattern of light emitted by a vehicle's headlamps and recorded on a DVR. Thus, I'm not sure how Grant was trained in reverse projection by the FBI for the purpose of the analysis of recordings of light from vehicles.

Moving on, the time period from 21:00 to about 27:00 provides more mystery as to just what this thing we're calling "Headlight Spread Pattern Analysis" is.

Grant is asked if "Headlight Spread Pattern Analysis" is a subset of measurement. Do you do measurements? Is "Headlight Spread Pattern Analysis" a measurement exercise? No is the answer. Grant responds that "Headlight Spread Pattern Analysis" is different. "Headlight Spread Pattern Analysis" is a comparison. (The captioning system had a hard time with the cross examination)
So, if "Headlight Spread Pattern Analysis" is a subset of forensic photographic comparison, then it should follow the community's consensus "Best Practices" for that discipline - I would think. For that, we can turn to SWGDE's Best Practices for Photographic Comparison for All Disciplines. Version: 1.1 (July 18, 2017) (link).

With a fair reading of the SWGDE document, a question is required. Is the presence of a  "headlight spread pattern" in a recording a "class characteristic" or an "individualizing characteristic" of a vehicle? I think that it can be both, depending on the circumstances. 

But, as Grant has declared "Headlight Spread Pattern Analysis" to be a subset of forensic comparison, and SWGDE's guidance in the above referenced document notes that "[t]he results of the examination should undergo independent review by a comparably trained individual. If disputes during review arise, a means for resolution of issues should be in place," who independently reviewed Grant's conclusions? This issue, in my opinion, is where the bulk of the cross examination should have gone. Unfortunately, it didn't.

Thus, we have from Grant that "Headlight Spread Pattern Analysis" is a subset of forensic comparison. One is just analyzing an object in a recording - no different from analyzing any other object in a recording - and comparing it to objects in other recordings.

Except that he doesn't treat it that way. He seems to insist that "Headlight Spread Pattern Analysis," whilst a comparison exercise, is its own science with its own publishing history. He offers up a list of publications to reinforce his opinion at 23:00 in the video.

Grant offers a publication, “Sensitivity Analysis of Headlamp Parameters Affecting Visibility and Glare” (2008), DOT HS 811 055 (link), seemingly to buttress his opinion that all headlight spread patterns are different or unique. Having reviewed the document, I can tell you that it has nothing to offer in informing a discussion on comparing features in a recording.
Having searched the document, the phrase "headlight spread pattern" appears exactly zero times. When the word "spread" is used (just one time in the document), it's employed in the context of statistics - as in the range of measured luminance values. "Pattern" appears nine times, mostly in combination with headlamp and beam. "Headlamp beam pattern" is discussed not from a comparative standpoint between different vehicles, but from a safety and perception standpoint of drivers. The purpose of the document is to inform a safety discussion - how to protect on-coming motorists from the potentially negative effects of a powerful headlamp beam. The word "unique" does not appear. When the word "identify" appears, it's within the context of purpose statements for the various experiments proposed. Given this, the referenced document cannot serve to inform a discussion on the comparison of objects within recordings.
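This sort of term check is easy for anyone to reproduce. Below is a minimal sketch (not my actual workflow) that counts a few phrases in a document that has already been extracted to plain text; the file name and the phrase list are placeholders, not the actual NHTSA document.

    # Minimal sketch: count how often specific phrases appear in a report that
    # has been extracted to plain text (e.g., via a PDF-to-text utility).
    # File name and phrase list are hypothetical placeholders.
    import re

    phrases = ["headlight spread pattern", "headlamp beam pattern", "spread", "pattern", "unique"]

    with open("dot_hs_811_055.txt", encoding="utf-8") as f:
        text = f.read().lower()

    for phrase in phrases:
        # Word boundaries keep "spread" from matching "widespread," etc.
        count = len(re.findall(r"\b" + re.escape(phrase.lower()) + r"\b", text))
        print(f"{phrase!r}: {count}")

Run against your own extraction of any cited source, it takes a minute to confirm or refute a claim about what a document does or does not discuss.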

The second (almost) complete reference given is to a "study" conducted by Grant's LEVA colleague, Scott Kuntz, at the Dane County (WI) Sheriff's Dept. He indicates that a number of "Crown Vics" were "pulled off the (production) line" and studied. I've searched for this study. I can't find it. Grant did not say if it was peer-reviewed or published. I tend to think that it wasn't. If it was a presentation topic at a LEVA conference, it doesn't count. LEVA is not a peer-reviewed forensic science journal publisher - as opposed to the IAI and its Journal of Forensic Identification.

Grant also references the LEVA Glossary's mention of "headlight spread pattern." But, a mention in a glossary - in whose creation Grant took part - is not proof of anything. The term's definition is not proof of scientific validity or suitability.

Another odd question from the defense (26:56) asked about a mention of Grant in one of Dr. John Russ' books, Forensic Uses of Digital Imaging, Second Edition (link). Grant helps to clarify the question (thankfully) as merely a mention, on Dr. Russ' part, of someone who engages in such things. Certainly not a peer-review of "headlight spread pattern analysis."

Thus, we're left with the impression that an analysis of the recordings of headlamp beam patterns (NHTSA) / headlight spread patterns (Grant) is something rather specialized, yet essentially a comparative exercise. Given this, the normal rules for comparisons - and the conclusions that come from them - should apply. This means that a similarity / difference in the studied object should be logged, but that a single "individualizing characteristic" can't be used to declare that an object is unique "to the exclusion of all others."

I hope this helps.

Have a great weekend my friends.

LEVA's training and certification programs accredited by the University of Indianapolis?

Following up on the previous posts, there were just a few points in the cross examination of Grant Fredericks in the Sidney Moorer retrial (link) that left me with questions.

Around 18 minutes into the cross examination, the attorney for the defense begins a line of questioning about LEVA.

The first set of questions deals with LEVA's membership.

First, Grant is asked if LEVA is primarily a "law enforcement" organization. Grant doesn't quite answer the question, not having visibility into LEVA's membership roster. But, I have some numbers on membership that aren't all that old. I'll answer the question for him. The answer is yes. The membership of LEVA is heavily skewed towards state and local law enforcement employees.

Around 19:23, Grant describes the LEVA training program and compares it to a college Master's degree. To be fair, the average college Master's program consists of 10-13 courses of 3 credit hours each, with each course covering 20-24 class hours. At the low end, a Master's program would have the student in class for 10x20=200 hours. At the high end, the student would be in class for 13x24=312 hours. This is exclusive of the program's Thesis or Capstone requirement. LEVA's Certification Program (link) consists of the individual Levels (4x40=160 hours), 88 hours of Image / Video training, 24 hours of courtroom testimony training, and 48 hours of specialized training - 160+88+24+48=320 hours. The problem with the language employed in Grant's response was that it seemed that he was equating the rigor of a college Master's program with the rigor of LEVA's certification program - which are simply not comparable. But, to be fair, the amount of time spent on each is roughly comparable.

It's at 20:26 where my jaw hit the floor.
According to Grant, in his testimony in this case, LEVA's training and certification program is accredited by the University of Indianapolis. I'm including the screenshot (above) that references the time in the video as well as the closed captioning that shows what was said.
This, of course, was news to me. I am aware of LEVA's history of attempting accreditation. At one time, I was a part of that discussion. I am aware that their programs are not, actually, accredited.

Given that the purpose of this exercise is not to poke fun at an individual or organization but to attempt to glean meanings, what on earth could Grant have meant?

Perhaps Grant meant that, at the time of the launch of the LEVA Lab at the University of Indianapolis, the courses that were offered in that space were covered by the University's accreditation. I would be skeptical of this, however, as my own experience in holding "private" courses on a college campus, outside of the regular college calendar, offers a different view. When in discussions with schools like Woodbury University, the University of Central Oklahoma, California Polytechnic University, Pomona, and Laramie County Community College, the discussions were around "work for hire" and "renting space," not that any course that I (or my companies) would offer would be accredited under the school's accreditation regime. Having prepared for the process of accrediting Apex's micro learning offerings through DEAC (link), I'm well familiar with accreditation regimes.

Needless to say, I've reached out to LEVA for clarification on this issue. I'll let you know, dear reader, if / when I receive a response.

Saturday, September 14, 2019

Accessible testimony

After making my way through Grant Fredericks' direct testimony in the Sidney Moorer Retrial (link & link), I'm left with an overall impression as to the accessibility level of testimony - in a general sense, can the listener follow along easily? Remember, this whole exercise for me was to assess the effectiveness of testimony, ascertain which words and phrases work better than others, and attempt to reconcile my own style with that on display in the videos.

As an L2 English speaker, I sometimes have problems keeping track of pronouns in a complex sentence (which is also why I favor the written over the spoken word). If you fell asleep in English class: a word is commonly classified as a pronoun when it is used to replace a noun that has already been mentioned or can easily be known. Words like "it" and "this" can be classified as pronouns, as an example.
As I listened and reviewed the badly translated captions, I was often confused by which "it" or "this" we were considering. Comprehension and tracking of items were especially problematic when the discussion turned to multiple trucks or multiple DVRs. To which item was Grant referring? His intention might have been apparent to those in the jury box, with the ability to "see" what was on display. But, one should never assume. Thus, I think a more formal approach, avoiding pronouns as much as possible, might be the way to go.

 So I'm curious. How aware are you of the use of pronouns in your testimony? Just a thought.

Enjoy the weekend my friends.

What does virtually indistinguishable mean?

Continuing on in my quest to understand words, meanings, and styles - using the direct testimony of Grant Fredericks in the Sidney Moorer Retrial (link), I was struck by a "conclusion" made towards the end of the direct examination.
At 25:01, in response to a question of comparison between the "questioned" and "known" vehicles, Grant makes the statement that one is "virtually indistinguishable" from the other. So, the obvious question: what does "virtually indistinguishable" mean?

Indistinguishable can mean, "not able to be identified as different or distinct." Would the use of the word, "indistinguishable," mean that Grant is saying that the "known" and "questioned" vehicles are one and the same? No. I don't think so.

Grant uses the modifier, "virtually." Virtually, as a modifier, means "almost" or "nearly."

Why?
Remember, SWGIT's Section 16 - Best Practices for Forensic Photographic Comparison? The above graphic is Figure 4 from that document (found on pg 7). It's a conclusion matrix for use when engaged in comparative analysis - as Grant is in this case. I'm not sure where "virtually indistinguishable" lies on the continuum of conclusions from Figure 4. It seems that he's saying that there is similarity between the two, but he stops short of determining that there's a match, an "identification." Thus, is there moderate support, strong support, or powerful support? Where is "almost" or "nearly" on the spectrum?
I guess the problem can boil down to a "we see" methodology vs. an "I have cataloged the following / listed features" methodology, as illustrated in the SWGDE Vehicle Make / Model Comparison Form (link) that I use in my casework and classes. How many "we see" instances equals an identification or a match? There's no evidence in his direct testimony that the SWGDE Vehicle Make / Model Comparison Form informed his work or his testimony.

This is my problem with language. This type of non-conclusion conclusion is why I cite the relevant standards, as well as the relevant definitions, in my reports. I don't make up new words and phrases. I use the words and terms as defined by the community in its consensus standards.

It's almost time for LSU football, my friends. Then, I'll study the cross examination in this case.

Have a great weekend my friends.

Consistency is not a match

Further into the Sidney Moorer Retrial videos on YouTube (link), I've come across a word used in testimony, "consistent" (1:16:24).
... and again at 1:19:52.
What does it mean to be "consistent?" Consistent is defined as compatible or in agreement with something. Consider the following images.
The image above is of Kevin McHale of Boston Celtics fame. Kevin is about 6'10" and of northern European ancestry. In this picture, he's holding a basketball in an arena during a game with the Sixers.

Now, examine the image below.
The unknown male holding the basketball shares similarities with Kevin McHale. He's either jumping or is taller than those around him. He's holding a basketball, albeit with one hand. He's in an arena. He has a similar skin coloring. And based on the clothing, the image seems to be from the same era in terms of sports fashion.

But, the unknown male is looking away from the photographer.

Given all of the similarities, should the "I can't rule out Kevin McHale as the unknown male in the image" methodology apply? Of course not. But, the unknown male is "consistent with" Kevin McHale. Surely, you can "see" that. Isn't seeing that they're both tall caucasians playing basketball enough "consistency" to conclude that they are the same person? Hopefully, you answered that in the negative.

You see, my investigation of the second image determined that the unknown male holding the basketball was, at the time, 6'6" tall, not 6'10". Also, Kevin McHale went to Hibbing High School in Hibbing, MN. I have determined that this image was taken at Whittier High School in southern California. How did I determine these facts? I'm #51 in the image. That was me, in my sophomore year in HS, playing varsity basketball. I know where I was and how tall I was at that age.

The point being, a tall white guy is "consistent with" another tall white guy. But, consistency is not a match - not an identification. It's not proof. Using "consistent with" is an equivocation, nothing more.

I probably should be out playing basketball, but these videos are fascinating.

Have a great weekend my friends.

Demonstrative evidence, non-experimental research, and the opinions of experts

Further into the Sidney Moorer Retrial videos on YouTube (link), a fascinating exchange happens around Grant Fredericks' explanation of his analysis of the recordings of the questioned vehicle's headlight illumination as captured by digital CCTV systems.

Picking up on the previous post's analysis of specific phrasing and word usage, this testimonial experience is a treasure trove of semantical goodies.
At 40:30 into the video (link), we're presented with two seemingly complementary terms which are, in fact, opposites. The attorney asks Grant about the processes involved in making his "determination," and his response focuses on his "observations." I've written about this topic previously (link). In general, non-experimental research (aka "observation") cannot help to determine causes, or the reasons "why" things are so.
As such, you won't know if what you are observing is accurate or in error in some way (or "consistent"). How many observations move the exercise from non-experimental to experimental (trick question)?
You see, to "determine" is to ascertain or establish exactly, typically as a result of research or calculation. Thus, "observation," with all of its built-in bias and error potential, can't lead to a "determination" or a conclusion that is reliable and reproducible (which is a must in the Daubert world). We saw this problem illustrated in Exhibit A on Netflix - with Grant arguing for determination over observation.

At 43:05, there is a discussion about the spread of lights from the vehicle over the roadway, as captured by a recording system, and how Grant includes the "headlight spread pattern" as a Class Characteristic. Given that SWGDE Best Practices for Photographic Comparison for All Disciplines [Version 1.1 (July 18, 2017) (link)] defines Class Characteristic as "a feature of an object that is common to a group of objects," he is correct. Vehicles tend to have headlights. Carrots, another class of object, do not. For the spread of a pattern of light coming off a particular vehicle, and captured by a particular recording system, to be an Individualizing Characteristic (a feature of an object that contributes to differentiating that object from others of its class), it would have to be unique.
Why?

Consider human finger prints. How unique are they? How are they examined? What role does the examiner play in determining a match when comparing a known to an unknown impression? What role does the quality of each impression have in this process? Why does the impression evidence community insist upon high quality scans when digitizing impression-based evidence? Why indeed? The quality affects the results?

If the recording quality of the vehicle's headlight spread pattern is low, then can the result of "match" be trusted? If the recording quality is low, might the results be different when a higher quality recording system is used?

I ask this because a literature review seeking results of experiments in this area has come up empty. No one has published the results of experiments quantifying this phenomenon or technique whilst controlling for ambient light or recorder system type. With that in mind, Grant's statement at 43:15 is quite puzzling. He states that there is significant research, study, and publications establishing headlight spread pattern analysis. I have found this to not be the case.
With research library access at two US universities, I can find absolutely no literature that establishes the analysis of the spread of illumination from automobile headlights as a science, or even a practice. Thus, his comment at 46:57 is rather curious. Where are all of these studies that he's read? Isn't the point of Daubert's insistence on peer-reviewed and published sources to enable others to peek in on what you've used to inform your opinion? The mechanical / safety specifications for the canting of headlights in a given market are interesting trivia. But, they are not relevant in terms of the analysis of light patterns captured by recording systems.
Nevertheless, he continues on to note that his work used a sample of eleven known vehicles in an attempt to "eliminate" the "unknown vehicle" from the inquiry. In his work, he was not able to eliminate the vehicle. We'll get there in a second, but first, let's look at a sample size of eleven.
Stepping into the role of Stats professor, let's look at a Power Analysis of that sample of 11 vehicles. At a sample of 59 vehicles, the analyst has about one chance in a hundred of being wrong (see above graphic). With a sample size of 11 for the type of test being conducted (generic binomial / match-no match), the analyst has 8 chances in 10 of being incorrect - of missing the effect if there is one to detect. This sample, eleven, might have been tough to organize at the time, but it's still a sample of convenience and should be noted as such. Additionally, the analyst should note the power analysis of that convenience sample and any effect it might have on a conclusion in the limitations section of their report.
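If you'd like to run this sort of check yourself, here's a minimal sketch of an exact binomial power calculation. The null and alternative proportions and the alpha level are my own illustrative assumptions - they are not the parameters behind the graphic referenced above - so the exact figures will shift with the effect size you assume. The point is how quickly power collapses at small sample sizes.

    # Minimal sketch: power of a one-sided exact binomial (match / no-match) test
    # at two sample sizes. p0 (chance performance) and p1 (assumed true rate) are
    # illustrative assumptions, not values taken from the testimony or the graphic.
    from scipy.stats import binom

    def exact_binomial_power(n, p0=0.5, p1=0.75, alpha=0.05):
        # Smallest k whose upper-tail probability under the null is within alpha.
        k_crit = next(k for k in range(n + 1) if binom.sf(k - 1, n, p0) <= alpha)
        # Power: probability of landing in that rejection region when p1 is true.
        return binom.sf(k_crit - 1, n, p1)

    for n in (11, 59):
        print(f"n = {n:2d}, power = {exact_binomial_power(n):.2f}")

Under these particular assumptions, a sample of eleven detects a real effect less than half the time, while fifty-nine approaches certainty - the general shape of the argument above, even if the precise numbers depend on the assumed effect size.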

That's if the analysis considered 11 of the same make / model / year / trim as the "questioned vehicle." It didn't. From ~1:24:00 forward, Grant walks the jury through a series of demonstratives "comparing" a single "known vehicle" of differing type to the "questioned vehicle." Essentially, he's assembled eleven single-sample groups. Wow!
As to the inability to eliminate the vehicle from consideration, this is quite problematic - and not just from a general semantics point of view. Consider what is said in light of US jurisprudence. Grant is there in response to his work for the State. The State has the burden of proof in any criminal proceeding. Thus, the first step in Grant's workflow should have been to "determine" the make / model / year / trim level of the unknown vehicle. If he was able to accomplish that task, the results could inform the comparison of the "known" to the "unknown."

But he doesn't do this. He says that he "can't eliminate" the vehicle from consideration. This is not proof of anything. It's equivocation at best and obfuscation at worst. It perverts the course of justice to, in essence, have the defendant attempt to prove a negative. Besides, a lack of exclusion is not proof of inclusion.

There's more.

Grant's testimony about the quality of the observed reflections requires one to ask about the measurement of said reflections. Given that no one was at the scene on the night in question, how was the "historical footage" evaluated? What role did the recording system play in "establishing the diffusion" of the light coming from the vehicle's head lamps?

Let's say you wanted to run that experiment and control for as many variables as you could in an attempt to quantify how a particular recording system captures the light from a scene. How many samples would you need to have confidence in your results? Try over 160 test recordings for size.
But, in this case, we have only one (as far as I can tell). A coin flip is less risky in this case. Thus, how can any sense of the quality of light be "established" if it wasn't quantified in the first place? Remember, we need to build a model of the system's performance because we weren't there at the time to accurately measure. As such, we must predict the system's performance. A logistic regression is an appropriate test in this case, and the sample size calculation represented above is for just such an experiment. This is what you would need to properly evaluate the system.
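For readers wondering where a figure like "over 160 test recordings" comes from, here's a minimal, simulation-based sketch of a sample size / power check for a logistic regression with a single predictor. The baseline log-odds, the effect size, the alpha level, and the candidate sample sizes are all my own illustrative assumptions - not the parameters behind the calculation referenced above - so the sample size at which power becomes acceptable will move with those assumptions.

    # Minimal sketch: simulation-based power estimate for a logistic regression
    # with one continuous predictor. beta0, beta1, alpha, and the candidate
    # sample sizes are illustrative assumptions, not the values used above.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)

    def estimated_power(n, beta0=-1.0, beta1=0.5, alpha=0.05, n_sims=500):
        hits = 0
        for _ in range(n_sims):
            x = rng.normal(size=n)                       # e.g., scene illumination level
            p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))
            y = rng.binomial(1, p)                       # e.g., feature visible / not visible
            try:
                fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
            except Exception:
                continue                                 # skip non-converged replicates
            if fit.pvalues[1] < alpha:                   # is the slope significant?
                hits += 1
        return hits / n_sims

    for n in (40, 80, 160, 240):
        print(f"n = {n:3d}, estimated power = {estimated_power(n):.2f}")

With these made-up parameters, the estimated power only becomes respectable (around the conventional 0.8 mark) once the sample climbs toward the 160-240 range - which is the flavor of calculation the post is pointing at. A formal, closed-form calculation would substitute the effect sizes and baseline rates you actually expect from pilot data.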
As a side note, in conducting the experiment, you would attempt to duplicate the conditions of the event in question. But, that's not what was done. At 1:46:40, Grant notes that the evidence recording depicts the "questioned vehicle" in motion (with some motion blur), whilst his experiment featured stationary vehicles (but the "drive by" vehicles were in motion, at some unknown speed) ... but I digress.

Finally, does every vehicle in the world - every vehicle ever produced or every vehicle that will ever be produced - have a unique headlight spread pattern?
Perhaps, yes. Perhaps, not. If it has been tested, the results have not been published in an accessible location. BUT ... the way in which light is emitted by a vehicle's head lamps is not what is at issue here. What is at issue is the RECORDING of that light. Could a vehicle's headlight spread pattern change in relation to the system that has recorded it? Of course. Grant says exactly that at 47:08, "with enough resolution, no vehicle shares the same headlight spread pattern..." Thus, without "enough resolution," can many vehicles share the same pattern?
The problem is, in reviewing the testimony in this case, I don't see where "enough" was established or the capabilities of the system quantified through reliable and reproducible science.
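To make the resolution point concrete, here's a minimal, entirely synthetic sketch: two different one-dimensional "light profiles" - one is a slightly offset copy of the other - are clearly distinguishable at full resolution, yet come out of a simulated low-resolution, low-bit-depth "recorder" as identical arrays. Every number in it is invented for illustration; it is not a model of any real headlamp or DVR.

    # Minimal synthetic sketch: two distinct light profiles (one a slightly
    # shifted copy of the other) differ at full resolution, but record
    # identically after coarse sampling and quantization. All values are
    # invented for illustration.
    import numpy as np

    x = np.linspace(-3, 3, 600)

    def profile(offset):
        # Two Gaussian "hot spots," as from a pair of headlamps.
        return (np.exp(-2 * (x - 0.5 - offset) ** 2)
                + np.exp(-2 * (x + 0.5 - offset) ** 2))

    pattern_a = profile(0.00)
    pattern_b = profile(0.05)   # aimed slightly differently

    def record(pattern, factor=200, levels=4):
        # Simulate a coarse sensor: block-average (low resolution),
        # then quantize to a few intensity levels (low bit depth).
        coarse = pattern.reshape(-1, factor).mean(axis=1)
        return np.round(coarse * (levels - 1)).astype(int)

    print("Identical at full resolution?", np.allclose(pattern_a, pattern_b))
    print("Identical after coarse recording?",
          np.array_equal(record(pattern_a), record(pattern_b)))

With the made-up profiles and the aggressive downsampling used here, the two patterns record identically even though they differ at full resolution - which is the sense in which "enough resolution" is doing all of the work in the quoted statement.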

Finally, in his "comparison" of various models of trucks vs. the unknown vehicle, he notes that "the" particular truck tested "is eliminated," not that the entirety of all of that particular vehicle's make / model / year / trim line can be eliminated.
This case and the testimony are fascinating.

Have a good weekend, my friends.

Semantics, General Semantics, and Definitions

I have not hidden the fact that I am autistic, or that in my spare time, I spend a lot of time teaching in the autistic community. I've also not hidden the fact that as a non-verbal autistic person, I consider my own processing of the English language to be "L2" - a "foreign language." Thus, I pay particular and close attention to the specific words that are used by myself and others when attempting to understand what is said around me.

Why bring this up?

Because I am not as proficient in the use of the English language as I would like to be, I enjoy watching others whom I consider to be skilled practitioners of the English language engage in the same types of activities in which I too regularly engage. Thanks to the internet - I can do so in the comfort of my own office or home.

For example, the Sidney Moorer Retrial is being broadcast by the Law and Crime Network, and the testimony is also available on YouTube (link to playlist).

Because of my relatively slow processing of the English language, and my attempt to process the meanings of specific words used, I usually have captioning / subtitles turned on. I appreciate that platforms like YouTube offer such accommodations. I also appreciate that I can pause, or play back selections, to ensure that I understand what is being said.

Take the following moment in Grant Fredericks' testimony, for example (see screen capture below).
Grant is being qualified to present his opinion on matters in this case. When asked about his background, he notes that he has a degree in Broadcast Television. His having acquired such a degree is a verifiable fact, meaning you can look this up at the National Clearinghouse (link) to verify that he does actually have the degree he's just mentioned.

He then mentions an "emphasis," or the concentration / focus of his degree work.
According to his testimony, he notes, "my emphasis was on television engineering." This "emphasis" is something that is not verifiable at the Clearinghouse. Here's why.

Note the use of the word, "my." It was his emphasis. In the language of autistics, it was his "special interest" or "self-actualizing" pursuit. It was likely not a feature of his degree path, nor would such a personal / special interest be reflected on his actual degree certificate.

Here's what I mean. As an undergrad, initially, my degree path was Political Science. But, as a football player, "my emphasis" was on football. Nothing about that emphasis was recorded on my eventual degree, save for the more than a dozen credit hours of football related courses I attended. My eventual two-year degree diploma in Political Science makes no mention of "my emphasis" in football.

On the other hand, related to my teaching work, I am currently working on some continuing education courses via the University of Toronto's Ontario Institute for Studies in Education (UofT OISE). The end result of my efforts will be a 150 hour certificate in Teaching English as a Foreign Language (TEFL). Knowing how to teach English as L2 is an important skill for educators working with autistic students. As part of that program, I have chosen "an emphasis" in teaching Business English as well as teaching native Mandarin speakers (autistic people in Mandarin speaking communities are particularly disadvantaged by their culture). This emphasis is "an emphasis" as it's included in the overall program design and listed on my certificate.

What this illustrates is something I regularly feature in my introductory courses - truth vs fact. When swearing in at trial, we swear to tell the truth, the whole truth, and nothing but the truth. Grant has done just that. It is true to Grant that he chose to focus his efforts in the declared area. It becomes fact when it becomes measurable or observable - or verifiable. If the efforts are listed on his degree certificate, then it's verifiable fact. If not, then it's truth - Grant did these things out of a personal desire or special interest - "my emphasis" indicates this. Given this rule of general semantics, truths aren't subject to rebuttal or objection. How are we to know his mind and heart on the matter? Likewise, you must take me at my word that as a teenager in college, football was my emphasis.

Later on in testimony, Grant is asked to explain his methodology that he employed in the case (41:40).
Grant notes that a comparison is "normally done through what is called reverse projection..." Here, my focus is two-fold (saving the "methodology" discussion for a future post): what does "normally" mean, and what is "reverse projection"?

As I've noted previously (link), "Reverse projection," in and of itself, is a simple exercise that results in the creation of a demonstrative exhibit. It's not a comparative exercise. It's not a measurement exercise. It can set these up, but in and of itself, it's just a visual demonstration of a single theory of a case. It has limited utility in Daubert states where one must account for alternative theories of the case. If one wishes to measure, then single image photogrammetry (link), a valid method with a long history of use around the world, is the option that I prefer.

The other portion of this statement that is significant is the use of the word, "normally." What does this word mean, "normally?" In statistics, a normal distribution would include the majority of a studied population. Is Grant stating that a majority of analysts create these types of demonstratives? Perhaps. But, given that the creation of a demonstrative is a separate process from the analysis of data, I'm not sure if one has anything to do with the other. Also, has this "normality" been quantified? I don't think that a survey has ever been conducted.

These are but a few examples within this fascinating testimonial experience. There are others that I'll explore in future posts.

Watching other analysts testify on YouTube might not be your cup of tea. I enjoy it. Studying the testimony of others is one way to learn what words work, what words don't work, and what questions you might face in your own testimony.

Watch the videos. Let me know what you think.

Friday, September 13, 2019

Review: Forensic science. The importance of identity in theory and practice

A paper was recently published over on Science Direct (link) exploring what may be a "crisis" in forensic science. Here's the abstract of the paper:

"There is growing consensus that there is a crisis in forensic science at the global scale. Whilst restricted resources are clearly part of the root causes of the crisis, a contested identity of forensic science is also a significant factor. A consensus is needed on the identity of forensic science that encompasses what forensic science ‘is’, and critically, what it is ‘for’. A consistent and cogent identity that is developed collaboratively and accepted across the entire justice system is critical for establishing the different attributes of the crisis and being able to articulate effective solutions. The degree to which forensic science is considered to be a coherent, interdisciplinary yet unified discipline will determine how forensic science develops, the challenges it is able to address, and how successful it will be in overcoming the current crisis."

The article seems to be an exploration of a struggle for identity, as the title suggests. What is it? What are we to do with it? What happens when it's not done correctly? Who's responsible for reform? Curiously, even as the paper notes the work done on forensic science in the US, it omits reference to the 2017 A Framework for Harmonizing Forensic Science Practices and Digital/Multimedia Evidence (link). I point this out because the OSAC's document provides a solid definition of forensic science that can serve as a foundation from which to explore the paper's topic, as well as to provide a path forward for research and practice.

As a reminder, that definition of forensic science: "The systematic and coherent study of traces to address questions of authentication, identification, classification, reconstruction, and evaluation for a legal context," where "[a] trace is any modification, subsequently observable, resulting from an event."

I think that within the OSAC's definitions, the goals outlined in the paper can be achieved.

Nevertheless, the likely root of the "crisis" as observed by the author can be seen in another area altogether - bias.
The topic of bias, and the many places it influences the justice process, has been explored in depth. In particular, these two papers (link) (link) explore the impact of bias from the standpoint of each of the stakeholders and the influence of modern media.

Another place where bias occurs is in the selection and continuation of cases by the prosecution. For example, what effect does linking a prosecutor's "win / loss" record to their promotability within their organization have on their decision making process? Once they've filed a case, is there evidence of an "escalation of commitment" when confronted with problems in proving their case? With "escalation of commitment" bias, the prosecutor may seek out "fringe techniques" in an attempt to support their theories of the case. Is it these "fringe techniques," not forensic science, that have contributed to the observed "crisis" - as was illustrated in Netflix's Exhibit A?

Still, I'm glad to see that people are starting to identify that there might be a problem in the practice of "forensic science." How that problem, or "crisis," is addressed will make all the difference in the world.

Have a good weekend my friends.

Thursday, September 12, 2019

Can Amped's Authenticate assist in detecting "deep fakes?"

Short answer: no.

Long answer: everything old is indeed new again.

In 2016, I presented an information session on authentication at the LEVA Conference in Scottsdale, Az. Here's the session description:

Understanding Concepts of Image Authentication Workshop (2016)
This workshop is for those interested in the authentication of digital images. The workshop provides an overview of the techniques and skills necessary to perform basic authentication examinations using Amped Authenticate (Axon Detect) on digital images in a “forensic science” setting as well as to package, deliver, and present those findings in their local court room context.

At this year's LEVA Conference, it seems that the old topic is being dusted off, with one small addition. Can you spot the main difference between the 2016 session description (above) and this year's (below)?

Authenticate: The Beginners Guide to Image Authentication* (2019)
Image authentication techniques have multiplied over recent years. The simplicity of Image editing and the increase of bogus imagery, “Deep Fakes”, being identified in the media has, quite rightly, meant that methods to detect manipulation must be available to the legal system.

The difference: "Deep Fakes"

There's one big problem though. A "deep fake" is a video. Authenticate doesn't work on video, only images of a specific file type.

In my forensic multimedia analysis course (link), I feature a number of proposed techniques that address so-called "deep fake videos." All of the solutions work within a fusion-based methodology - requiring different tools and applications of those tools to identify the many components that make up fake videos.
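To make the fusion idea concrete, here's a minimal, purely illustrative sketch - not Amped's workflow and not my course's methodology - of one component: decomposing a questioned video into still frames and running an image-level check on a sample of them, then looking at the aggregate. It assumes OpenCV is installed; the file name is hypothetical, and frame_score() is a placeholder for whatever validated image-level analysis you actually use.

# Illustrative sketch only - not any vendor's workflow or a validated method.
# Requires OpenCV (pip install opencv-python). frame_score() is a hypothetical
# placeholder for a validated image-level analysis.
import cv2

def frame_score(frame) -> float:
    """Hypothetical image-level check; returns a suspicion score in [0, 1]."""
    return 0.0  # a real, validated detector would go here

def score_video(path: str, sample_every: int = 30) -> list:
    """Decompose the video into frames and score every Nth frame."""
    scores = []
    cap = cv2.VideoCapture(path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            scores.append(frame_score(frame))
        idx += 1
    cap.release()
    return scores

if __name__ == "__main__":
    per_frame = score_video("questioned_clip.mp4")  # hypothetical file name
    print(f"sampled {len(per_frame)} frames; max score = {max(per_frame, default=0.0):.2f}")

Frame-level checks are only one component of such a fusion approach; temporal consistency, container analysis, and other techniques make up the rest.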

But the wider question: why use the term "deep fakes" incorrectly to market an information session on an image authentication tool? Is this SEO gone wild? I don't know. At a minimum, it's confusing. At worst, it's deceptive. Hopefully, you weren't planning to attend just to learn about "deep fakes." If you were, you're sure to be disappointed - again, Authenticate does not process or authenticate video.

Have a great day my friends.

Wednesday, September 11, 2019

The best artists steal

It's an old adage in the art world that good artists copy the work of their masters, but the great artists steal it (link). With that in mind, a reader of the blog pointed out the apparent similarities between the Camera Match Overlay in Input-Ace (link) and a patent on file at the USPTO (link).

If you read the marketing around the Overlay tool, you might begin to notice the similarities between it and the patent. But don't be fooled. US Patent 9,305,401 (link) describes a system for building out a scene in 3D utilizing images from that scene, then conducting various measurements. Overlay does none of that. Overlay does exactly what the name implies - it allows you to take the user interface of Input-Ace and "overlay" it on top of the user interface of the tool with which you are actually conducting your measurement exam. The idea is that you can infer the location of objects present in the images loaded in ACE, as overlaid on the measurement tool, from the data in the measurement tool. Because of this, I don't think ACE is infringing on anyone's patents - in my opinion.

Yes, the patented process is operationalized into Cognitech's software, and that software is actually reconstructing the scene and performing the measurement exams. No, Input-Ace's Camera Match Overlay tool is not reconstructing a scene and is not used directly in conducting the measurement.  Yes, it does lend a hand in attempting to calculate the range of potential measurement values (and thus the error potential in the measurement), so you'll need to be extra careful in how you report and present your results using their methods. You'll also need to explain how you validated your unique results. Yes, you should validate any tool used in your work as well as the results that lead to an opinion / conclusion.
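On the topic of ranges and error potential, here's a tiny, hypothetical illustration (in Python, with made-up numbers) of what reporting a measurement as a range rather than a single value looks like. It isn't Input-Ace's or Cognitech's computation - just the general idea that the reported number should carry its uncertainty, and that the way you arrive at that uncertainty must itself be validated.

def measurement_range(pixels, scale, scale_uncertainty):
    """Convert a pixel measurement to real-world units, bounded by the
    best- and worst-case scale factors (units per pixel)."""
    low = pixels * (scale - scale_uncertainty)
    high = pixels * (scale + scale_uncertainty)
    return low, high

# Hypothetical numbers: a subject spans 412 px; scale is 0.44 cm/px +/- 0.02 cm/px.
low, high = measurement_range(412, 0.44, 0.02)
print(f"Estimated height: {low:.1f} cm to {high:.1f} cm")  # 173.0 cm to 189.5 cm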

Thanks for reading. Keep the comments coming. Have a good day my friends.

Tuesday, September 10, 2019

Detecting 'Doctored' Images


Just a wee reminder that authenticating multimedia evidence (aka 'spotting doctored images on the internet') is a bit more complicated than finding the "doctor" in the image.

If you'd like to learn the underlying science behind authenticating this complicated evidence type, check out our course - Forensic Multimedia Authentication (link). Offered on-line as micro-learning, seats are always available and you learn at your own pace. Click on the link for more information, or to sign up today.

Monday, September 9, 2019

Everything old is new again

Last month, I took a look at the new Camera Match Overlay feature in Input-Ace. The Overlay feature can be used in conjunction with a 3D laser scanner and its accompanying software to create demonstrative exhibits.

I'm not a big fan of the feature, preferring single image photogrammetry to conduct measurements within the evidence items without creating brand new exhibits. But, it occurred to me ... one of the nice things about getting old is you get to see history repeat itself.

When I first arrived at the LAPD in 2001, it had a video / image processing workstation from Cognitech. It was then that I first met Dr. Lenny Rudin. Last month, I was in Pasadena to present a lecture and ventured over to CalTech's amazing restaurant to have lunch with Dr. Rudin and catch up on what's new with him and Cognitech.

Our conversation bounced all over the place and eventually exceeded the allotted time that the restaurant had for lunch service. I like those conversations where time ceases to be a factor, just enjoying the topics and the company.

Of course the current state of the industry came up. Who's who and what's what. If you're reading this and you don't know the names (Cognitech and Dr. Rudin), that's a shame. Dr. Rudin is one of the founders of this thing we now call Forensic Multimedia Analysis, but he has largely been written out of its history by those with a more commercial agenda. It's mostly the old folks who still know the likes of Dr. Rudin and Dr. Russ.

Nevertheless, we discussed Input-Ace's Overlay feature a bit, noting its similarity to Cognitech's Measure package, which began life in the 1990s (now called AutoMeasure).

Everything old is new again. You can find the 1995 paper that describes Cognitech's Measure over at the SPIE (link). It's not an "overlay" procedure as such, but a single image photogrammetry method that builds out the 3D space within the 2D image. Unlike Overlay and its mixed-methods approach, Cognitech's Measure does have a validation history available in the literature.
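For readers who haven't seen single image photogrammetry before, here's a deliberately over-simplified sketch of the underlying idea - emphatically not Cognitech Measure's algorithm, which works in projective 3D space. It assumes the reference object and the subject lie in the same plane, roughly parallel to the image plane, so that a simple pixel ratio holds. Real casework rarely cooperates that neatly, which is why the full photogrammetric treatment, and its validation, matters.

def estimate_height(reference_real_cm, reference_px, subject_px):
    """Scale the subject's pixel height by a reference object of known size.
    Valid only under the (strong) coplanar, fronto-parallel assumption."""
    return subject_px * (reference_real_cm / reference_px)

# Hypothetical numbers: a 203 cm doorway spans 580 px; the subject spans 498 px.
print(f"Estimated subject height: {estimate_height(203, 580, 498):.1f} cm")  # ~174.3 cm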



Over lunch, we talked about where Cognitech's tools are now and what the plans are for the future of Cognitech's offerings. It has been a while since I've used Cognitech's tools, so I'm looking forward to getting to know the new versions and the new products. Hopefully, if Dr. Rudin agrees, I'll be showcasing some of them in future blog posts.

Have a good week my friends.

Friday, August 30, 2019

Yes, you do need stats ... actually

Yesterday, I received the good news that my validation study of how a course in statistics could improve the statistical literacy of digital / multimedia forensic analysts when delivered on-line as micro-learning was published by the Chartered Society of Forensic Science in the UK. I got excited and put the good news on my LinkedIn feed.
Along with the usual emoji responses, I received the comment shown below.
Rather than simply comment there, I'd like to take the opportunity to illustrate the many ways in which it's not just me who says that the world of the digital evidence analyst can benefit from a solid foundation in statistics.

You see, the course was created because the relevant government bodies around the world have said, on a rather regular basis, that the investigative services and the forensic sciences need a solid foundation in statistics.

Starting at the US government level, there's the PCAST Report from 2016 (link): "NIST has also taken steps to address this issue by creating a new Forensic Science Center of Excellence, called the Center for Statistics and Applications in Forensic Evidence (CSAFE), that will focus its research efforts on improving the statistical foundation for latent prints, ballistics, tiremarks, handwriting, bloodstain patterns, toolmarks, pattern evidence analyses, and for computer and information systems, mobile devices, network traffic, social media, and GPS digital evidence analyses." (emphasis is mine)

CSAFE has already responded with some tools for digital forensic analysts (link). The ASSOCR tool helps analysts "determine if two temporal event streams are from the same source" via an R package that implements score-based likelihood ratio and coincidental match probability methods.
The HEISENBRGR toolset can be used to "match accounts on anonymous marketplaces, to figure out which of them belong to the same sellers."
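For those unfamiliar with the score-based likelihood ratio mentioned in the ASSOCR description, here's a toy sketch of the general idea in Python - not ASSOCR's implementation, and the calibration scores are fabricated purely for illustration. An observed similarity score is evaluated against distributions of scores from known same-source and known different-source pairs, and the ratio of the two densities expresses which proposition the score supports.

# Toy sketch of a score-based likelihood ratio - not ASSOCR's implementation.
# Requires numpy and scipy. All scores here are simulated for illustration.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)
same_source_scores = rng.normal(loc=0.80, scale=0.08, size=500)  # hypothetical calibration data
diff_source_scores = rng.normal(loc=0.45, scale=0.12, size=500)

same_density = gaussian_kde(same_source_scores)
diff_density = gaussian_kde(diff_source_scores)

observed = 0.72  # similarity score for the questioned pair of event streams
lr = same_density(observed)[0] / diff_density(observed)[0]
print(f"Likelihood ratio at score {observed}: {lr:.1f}")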
What about NIST? What is the issue that NIST is taking steps to address? The PCAST report notes, "The 2009 NRC report called for studies to test whether various forensic methods are foundationally valid, including performing empirical tests of the accuracy of the results. It also called for the creation of a new, independent Federal agency to provide needed oversight of the forensic science system; standardization of terminology used in reporting and testifying about the results of forensic sciences; the removal of public forensic laboratories from the administrative control of law enforcement agencies; implementation of mandatory certification requirements for practitioners and mandatory accreditation programs for laboratories; research on human observer bias and sources of human error in forensic examinations; the development of tools for advancing measurement, validation, reliability, and proficiency testing in forensic science; and the strengthening and development of graduate and continuous education and training programs."

It's that last bit that prompted me to design and validate an instructional program in statistics for forensic analysts. But it's the first sentence that speaks to the comment from LinkedIn. Analysts don't deal in absolutes or definites - in binary answers. The world of the computer program may be binary, but the world certainly isn't. There is a natural variability to be found everywhere. But more to the comment's point, how does an analyst know that their "various forensic methods are foundationally valid," which includes "performing empirical tests of the accuracy of the results"?

Ahh... but, you're saying, all of your support is from the United States. It doesn't apply to the rest of the world. In that, you're wrong. Let's look at the UK.

In September 2018, Members of the Royal Statistical Society Statistics & Law section (link) submitted evidence (link) to a House of Lords Science and Technology Committee inquiry on Forensic Science. Question 2 asked, "what are the current strengths and weaknesses of forensic science in support of justice?" Here's the RSS' response. Notice the imbalance between strengths and weaknesses.
I've highlighted the relevant section as it relates to this topic. "poor quality of probabilistic reasoning and statistical evidence, for example, providing irrelevant information because the correct question is not asked. For example, an expert focused on the rarity of an event, rather than considering two competing explanations of an event."

Our course on statistics for forensic analysts seeks to teach probabilistic reasoning, exploring the differences between objective and subjective statistics, as well as the fact that most of the forensic sciences currently work in the world of abductive reasoning (taking your best shot).

Now there's the accusation that digital analysts are often engaged in "push button forensics." We buy tools from vendors and hope that they're fit for purpose and accurate in their results. But are they? We don't know, so we validate our tools (hopefully). If you're trusting the market to deliver reliable, valid, and accurate tools, you may be disappointed. As the above referenced report notes, "What can be learned from the use of forensic science overseas? Seen from continental Europe, there has been a loss of an established institution (FSS) with a profound body of knowledge. Now research seems scattered among different actors (mainly academic), as commercial providers might have other priorities and limited resources to invest in fundamental research." (emphasis mine)

To the Royal Statistical Society's point, if you're a digital analyst and there's a challenge to your conclusions or opinions, on what do you base your response or your work? For example, you've retrieved photos from a computer or phone. Your tool automatically hashes the files. But a cryptographic hash does not guarantee the authenticity of a file; it only provides a value with which to address questions of integrity. How do you conduct an authenticity examination without a knowledge of statistics? You can't. How do you validate your tools without a knowledge of statistics? You can't.
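To make the integrity-versus-authenticity distinction concrete, here's a minimal example using nothing but Python's standard library; the file name is hypothetical. Matching digests tell you the bytes haven't changed since the hash was recorded. They tell you nothing about whether the image is an accurate depiction of the scene - that's the authenticity question, and answering it requires the statistical reasoning discussed above.

# Hashing demonstrates integrity, not authenticity. Standard library only.
import hashlib

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

acquired = sha256_of("IMG_0042.jpg")    # hypothetical file, hashed at acquisition
recomputed = sha256_of("IMG_0042.jpg")  # recomputed before analysis
print("integrity holds" if acquired == recomputed else "file has changed since acquisition")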

Over in Australia (link), there is agreement on the need for training and research - just what I've presented. "There is however one aspect of the report with which the Society is in complete agreement; the need for both continuous training and research in forensic science. We are also aware of the lack of funding for this research and therefore support the recommendation of PCAST that this is essential if our science is to continue to develop into the future."

To conclude, yes, you do need training / education in statistics if you're engaged in any forensic science discipline. Many practitioners arrive in their fields with advanced college degrees and thus will have had exposure to stats in college. But, on the digital / multimedia side, many arrive in their fields from the ranks of the visible policing services. They may not have a college degree. They may only have tool-specific training and may be completely unaware of the many issues surrounding their discipline. It's for this group that I've designed, created, and now validated my stats class. It's made in the US, to be sure, but it's informed by the global resources listed in this post - and many others.

I hope to see you in class.