Wednesday, July 4, 2018

How would you know?

It's been a while since I last presented at a LEVA conference. This time, I'll be presenting a topic that features some rather interesting information for Forensic Multimedia Analysts.

In editing the Session Descriptions, LEVA's Training Coordinator has seen fit to pay a visit to my web page and lift a bit of information about my educational journey to add to the speaker's biography that was submitted. That's fine. I'll play along. In this article, I'll illustrate what I've learned along the way to earning the degrees listed in my bio - and it's that learning that will be the feature of my LEVA talk, introduced here.

Yes, like many in law enforcement (including at least one of my fellow presenters at the Conference), I have degrees in Organizational Leadership. This is a solid degree choice for anyone aspiring to leadership in their organization, public or private. The difference between a "management" degree, like an MBA, and a "leadership" degree like mine (BOL / MOL) is quite simple, actually. Managers correct things that have gone wrong. Leaders help things go right in the first place. I happen to have received my degrees from a 130+ year old brick-and-mortar business school. Earning a business degree from a long-established business school leaves you with an incredible foundation in business principles. So what? What does that have to do with Forensic Multimedia Analysis?

Here's the "so what" answer. Let's examine the business of DVR manufacturing from the standpoint of determining the DVR's purpose and whether it fulfills that purpose. Attempting to identify purpose / fitness for purpose of the parts in the recording chain is one of the elements of the Content Triage step in the processing workflow. Why did the device produce a recording of five white pixels in the area where you were expecting to see a license plate? Understanding purpose helps answer these "why" questions.

What is the purpose of a generic Chinese 4 channel DVR? The answer is not what you think.

For our test, we'll examine a generic Chinese 4 channel DVR, the kind found at any convenience store around the US. It captured a video of a crime and now you want to use its footage to answer questions about the events of that day. Can you trust it?

Take a DVR sold on Amazon or at any big box retailer. There's the retail price, and there's the mark-up along the way to the retailer.

When you drill down through the distribution chain to the manufacturer, you find out something quite amazing.

The average wholesale price of a 4 channel DVR made in China is $30 / unit. Units with more camera channels aren't much more. Units without megapixel recording capability are a bit less. This price is offered with the manufacturer's profit built in. Given that the wholesale price includes a minimum of 100% markup from cost, and that there are labor and fixed costs involved, the average Chinese DVR is simply a $7 box of parts. The composition of that box of parts is entirely dependent upon what's in the supply chain on the day the manufacturing order was placed. That day's run may feature encoding chips from multiple manufacturers, as an example. The manufacturer does not know which unit has chips from a particular manufacturer - and doesn't care as long as it "works."

What's the purpose of this DVR? The purpose has nothing to do with recording your event. The purpose is to make about $15 in profit for the manufacturer whilst spending about $15 on parts, labor, and overhead. Check the wholesale listings again for 4 channel DVRs. There are more than 2,500 different manufacturers in China offering a variety of specs within this space ... all making money with their $7 box of parts.
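The arithmetic above is simple enough to sketch. A minimal worked example using this post's figures - the exact split between parts and labor / overhead is an illustration, not manufacturer data:

```python
# Unit economics of a generic 4 channel DVR, per the figures in this post.
wholesale_price = 30.00   # average wholesale price per unit
markup = 1.00             # minimum 100% markup over cost

cost = wholesale_price / (1 + markup)    # $15 of total cost per unit
profit = wholesale_price - cost          # ~$15 profit for the manufacturer
parts = 7.00                             # the "$7 box of parts"
labor_and_overhead = cost - parts        # ~$8 of labor and fixed costs

print(f"cost=${cost:.2f} profit=${profit:.2f} labor/overhead=${labor_and_overhead:.2f}")
# cost=$15.00 profit=$15.00 labor/overhead=$8.00
```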

Let's say the $7 box of parts at your crime scene recorded your event at 4CIF. You are asked to make some determination that involves time. You'll want to know if you can trust your $7 box of parts to accurately record time. How would you know?

One of the more popular DVR brands out west is Samsung. But, Samsung doesn't exist as such anymore. Samsung Techwin (Samsung's CCTV business unit) was sold to Hanwha Group a few years ago and is now sold as Hanwha Techwin (Samsung Techwin) in the US. Where does Hanwha get their $7 worth of parts within the supply chain? China, for the most part. Chinese factories can make DVR parts a lot cheaper than their Korean counterparts can.

Here are the specs for a Hanwha Techwin HRD-440.

This model, recording at 4CIF, for example, can record UP TO 120fps across all of its channels. UP TO means its maximum potential recording rate. It does not mean its ACTUAL recording rate at the time of the event in question. The "up to" language is placed there to protect the manufacturer of this $7 box of parts against performance claims. If it were a Swiss chronometer, it wouldn't need the disclaiming language. But, it's not a Swiss chronometer - it's a $7 box of parts.

What does the recording performance of the channel in question in the specific evidentiary DVR look like when it alone is under load (maximum potential recording rate)? What about the recording performance of the channel in question (at max) when the other channels move in and out of their own maximum potential recording rate? What happens within the system when all channels are at the max? Remember also that systems like these allow for non-event recording to happen at lower resolutions than event recording (alarm / motion). How does the system respond when a channel or all channels are switching resolutions up / down? How does what's happening internally compare with the files that are output to .avi or .sec files? How do these compare to data that's retrieved and processed via direct acquisition of the hard drive?
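Those questions can only be answered by measurement. A minimal sketch of the kind of calculation a performance model starts from - the frame timestamps here are synthetic stand-ins for what you'd harvest from test recordings made on the evidentiary unit itself:

```python
from collections import defaultdict

def achieved_fps(frames):
    """Given (channel, timestamp_in_seconds) pairs from a test recording,
    return each channel's achieved frame rate over its capture span."""
    by_channel = defaultdict(list)
    for channel, ts in frames:
        by_channel[channel].append(ts)
    rates = {}
    for channel, stamps in by_channel.items():
        stamps.sort()
        span = stamps[-1] - stamps[0]
        # (n - 1) inter-frame intervals over the span gives the mean achieved rate
        rates[channel] = (len(stamps) - 1) / span if span > 0 else 0.0
    return rates

# Synthetic example: channel 1 alone delivers a frame every 1/30s (30fps);
# when the other channels come under load, deliveries thin out to every 1/10s.
solo = [(1, i / 30) for i in range(31)]
under_load = [(1, i / 10) for i in range(11)]
print(achieved_fps(solo), achieved_fps(under_load))  # channel 1: 30fps solo, 10fps under load
```

Run against real test recordings, per-channel rates like these, collected under varying load, are the raw material of the performance model.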

How would you know? You would build a performance model. How would you do that if you have no experience? I'll introduce you to experimental science in San Antonio - at the LEVA conference. Experimental science is the realm of anyone with a PhD, regardless of the discipline (this is where Arizona v Romero comes into play). If you think the LEVA Certification Board is a tough group, try defending a dissertation.

Why a PhD in Education, you might ask. Three reasons. There are no PhDs in Forensic Multimedia Analysis for one. The second reason, and the subject of my dissertation, deals with the environment on campus and in the classroom that causes such a great number of otherwise well qualified people to arrive on campus and suddenly and voluntarily quit (withdraw). The results of my research can be applied to help colleges configure their classes and their curriculum, as well as to train professors to accommodate a diverse range of students - including mature adults with a wealth of knowledge who arrive in class with fully formed and sincerely held opinions. The third reason has to do with a charity that I founded a few years ago to help bring STEM educational help to an underserved community and population of learners in the mountain communities of northern Los Angeles / southern Kern counties in California.

Imagine that you've been told by your chain of command that you must have a certain level of education to promote at your agency. That's what happened to me. I was minding my own business with an AS in Political Science that I cobbled together after my college football career, such as it was, crashed and burned after injury. I later found myself in police service when these new rules were instituted. But, thankfully, our local Sheriff had approached the local schools promising butts in seats if they'd only reduce their tuition. So I finished my Bachelor's degree at an esteemed B-school for $7k and stayed there for an MOL for only $9k. The PhD path wasn't cheap, but it was significantly cheaper than it would have been without the Sheriff's office's help. As to why I chose to go all the way to PhD, that was the level of education necessary to make more pensionable money had I decided to switch from being a technician making more than half-again my salary in overtime (which isn't pensionable, sadly) to management. But, I digress. Back to work, Jim.

Sparing you the lecture on time and temporality here, the basic tenet of experimental science is that you can only measure "now." If you want to know what happened / will happen, you need to build a model. Meteorologists build a model of future environmental patterns to forecast the weather for next week. They don't measure next week's weather properties today. The same holds true across the sciences. Moneyball was a quant's attempt to model behavior in order to achieve a future advantage in sports.

When modeling performance, it's important to use valid tools and to control for all variables (as best as possible). At a minimum, it's important to know how your tools are working and how to not only interpret the results produced but to spot issues of concern within the results.

As an example, pretty much everyone in this space is familiar with FFmpeg and its various parts. Let's say that you use the command line version to analyze the stream and container of the .avi file from our example DVR (it's all you have to work with). It's an NTSC DVR and the results from your analysis tool indicate a frame rate of 25 frames per second (fps). Is this correct? Would you necessarily expect 25fps from an NTSC DVR? Is this FFmpeg's default when there's no fps information in the file (it's a European tool, after all)? Does total frames / time = 25fps? If yes, you're fine. If not, what do you do? You test.
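The "total frames / time" sanity check is easy to script once you've pulled the frame count and duration out of the file (e.g., with ffprobe). A sketch, with the numbers invented for illustration:

```python
def declared_vs_actual(total_frames, duration_seconds, declared_fps):
    """Compare the container's declared fps against frames / duration."""
    actual = total_frames / duration_seconds
    # The 0.5fps tolerance is a judgment call, not a standard.
    return actual, abs(actual - declared_fps) < 0.5

# Hypothetical .avi from our example DVR: the container claims 25fps,
# but the file actually holds 900 frames across 60 seconds of video.
actual, matches = declared_vs_actual(total_frames=900, duration_seconds=60.0, declared_fps=25.0)
print(f"actual={actual:.1f}fps, container matches: {matches}")
# actual=15.0fps, container matches: False
```

A mismatch like this doesn't tell you which value is right - it tells you that you need to test the device.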

Is your single evidentiary file (sample size = 1) sufficient to generalize the performance of your $7 box of parts? Of course not. In order to know how many samples are needed to generalize the results across the population of files from this specific DVR, you need to test - to build a performance model. How many unique tests will gain you the appropriate number of samples from which to build your model? Well, that depends on the question, the variables, and the analyst's tolerance for error ... and that's the focus of my talk at the LEVA conference.

The information from my workshop plugs in rather nicely with many of the other presentations on offer at the Conference. There's a rather synergistic menu from which to choose this year. Many presentations will feature how-to's of different techniques. Mine will show you how to identify the variables within those exercises, as well as how many repetitions of the tests will be needed, at a minimum, to validate your attempts at these new techniques.

I hope to see you there. :)

Tuesday, July 3, 2018

LEVA 2018 Conference - corrections

It's time to start planning for the next LEVA Conference. This time, the tour stops in San Antonio, TX.

The schedule's out and it looks like I'll be presenting on the morning of Wednesday, November 7, 2018. I'll be presenting my latest paper entitled Sample Size Calculation for Forensic Multimedia Analysis: the quantitative foundations of experimental science.

Abstract: The 2009 National Academy of Sciences report, Strengthening Forensic Science in the United States – A Path Forward, outlined specific structural deficits in the practice of forensic science in the US. A few years later, the Organization of Scientific Area Committees on Forensic Science (OSAC) was created within the US Department of Commerce (NIST) to address the issues raised and to publish standards in all of the recognized disciplines. Forensic Multimedia Analysis falls within the scope of the Digital / Multimedia Area Committee. In 2017, in an attempt to harmonize the various definitions of “forensic science,” the OSAC’s Task Group on Digital/Multimedia Science produced the following consensus definition: “Forensic science is the systematic and coherent study of traces to address questions of authentication, identification, classification, reconstruction, and evaluation for a legal context.” In clarifying the definition, they noted, “[a] trace is any modification, subsequently observable, resulting from an event.” An impression left behind is certainly “a trace,” as are biological materials; but so, too, is the recording of a person or a thing a trace of their presence at a scene.

In harmonizing practices across the comparative sciences, it has been recommended that all involved in the work have some familiarity with quantitative analysis and experimental science. This is evidenced in a recent Arizona Supreme Court case, Az. v Romero. In presenting this paper, “Sample Size Calculation for Forensic Multimedia Analysis: the quantitative foundations of experimental science,” I will introduce the science of quantitative analysis in general and sample size calculations in particular as they relate to three common examinations performed by forensic multimedia analysts. Attendees will learn the basics of experimental science and quantitative analysis, as well as detailed information on the calculation of the sample sizes necessary for many analytical experiments. The quantitative underpinnings of “blind” image authentication, forensic photographic comparison, and speed calculations from DME evidence will be presented and explored.

How many samples would you need for a 99% confidence in your conclusions that result from a “blind” image authentication exam? Hint: the answer isn’t 1 (the evidence image). Depending on the examination, and the evidence type, the number of samples varies. In this module, you will learn how to determine the appropriate number of samples for a particular exam as well as how to explain and defend your results.


My reason for this post? Why post the complete abstract here? It was edited in the Session Descriptions on the LEVA web site, removing some vital information and shifting the context a bit. Also, there were misstatements made in my bio below the Session Description that incorrectly listed the duration of my employment at the LAPD, as well as naming me the "founder" of the multimedia lab there. I'm posting the complete description as well as my professional biography to correct the record, in case a correction isn't made to the LEVA site.


Jim Hoerricks' Professional Biography:

Jim Hoerricks, PhD, is the Director of Customer Support and Training (North America) at Amped Software, Inc.

Previously, Jim was the Senior Forensic Multimedia Analyst for the Los Angeles Police Department. Jim co-founded the LAPD’s forensic multimedia laboratory in 2002 and helped set the standard for its handling of this unique type of evidence.

Jim is the author of the best-selling book, Forensic Photoshop, and a co-author of Best Practices for the Retrieval of Video Evidence from Digital CCTV Systems (DCCTV Guide). Jim also serves on the Organization of Scientific Area Committees for Forensic Science’s (OSAC) Video/Imaging Technology and Analysis (VITAL) subcommittee as the Video Task Group Chair.


Now, that's sorted. See you in November in San Antonio.

Friday, May 11, 2018

Report writing in forensic multimedia analysis

You've analyzed evidence. You've made a few notes along the way. You've turned those notes over to the process. Your agency doesn't have a specific requirement about what should be in your notes or your report or how detailed they should be. In all the cases that you've worked, you've never been asked for specifics / details.

Now, your case has gone to trial. An attorney is seeking to qualify you to provide expert (opinion) testimony. They introduce you, your qualifications, and what you've been asked to do. The judge may or may not declare you to be an expert so that your opinion can be heard.

As a brief aside, your title or job description can vary widely. I've been an analyst, specialist, director, etc. FRE Rule 702, and the similar rule in your state's evidence code, governs your testimonial experience. Here's the bottom line: according to the evidence code, you're not an "expert" unless the Judge says so, and then only for the duration of your testimony in that case. After you're dismissed, you go back to being an analyst, specialist, etc. You may have specific expertise, and that's great. But the assignment of the title of "expert" as relates to this work is generally done by the judge in a specific case, related to the type of testimony that will be offered.

A technician generally offers testimony about a procedure and the results of the procedure. No opinion is given. "I pushed the button and the DVR produced these files."

An expert generally offers opinion based testimony about the results of an experiment or test. "I've conducted a measurement experiment and in my opinion, the unknown subject in the video at the aforementioned date/time is 6’2” tall, with an error of ..."

Everything's OK ... until it's not. You've been qualified as an expert. Is your report ready for trial? What should be in a report anyway?

First off, there are two types of guidance in answering this question. The first type, people's experiences, might help. But, then again, it might not. Just because someone got away with it doesn't make it a standard practice. Just because you've been through a few trials doesn't make your way "court qualified." These are marketing gimmicks, not standard practices. The second type, a Standard Practice, comes from a standards body like the ASTM. As opposed to the SWGs, who produce guidelines (it would be nice if you ...), standards producing bodies like the ASTM produce standards (you must/shall). For the discipline of Forensic Multimedia Analysis, there are quite a few standards that govern our work. Here are a few of the more important ones:

  • E860-07. Standard Practice for Examining And Preparing Items That Are Or May Become Involved In Criminal or Civil Litigation
  • E1188-11. Standard Practice for Collection and Preservation of Information and Physical Items by a Technical Investigator
  • E1459-13. Standard Guide for Physical Evidence Labeling and Related Documentation
  • E1492-11. Standard Practice for Receiving, Documenting, Storing, and Retrieving Evidence in a Forensic Science Laboratory
  • E2825-12(17). Standard Guide for Forensic Digital Image Processing

Did your retrieval follow E1188-11? Did your preparation of the evidence items follow E860-07? Did you assign a unique identifier to each evidence item and label it according to E1459-13? Does your workplace handle evidence according to E1492-11? Did your work on the evidence items follow E2825-12?

If you're not even aware of these standards, how will you answer the questions under direct / cross examination?

Taking a slight step back, and adding more complexity, you're engaged in a forensic science discipline. You're doing science. Science has rules and requirements as well. Scientists' reports, in general, share a common structure. Go search scientific reports and papers in Google Scholar or ProQuest. The contents and structure of the reports you'll find are governed by the accredited institution. I've spent the last 8 years working in the world of experimental science, conducting experiments, testing data, forming conclusions, and writing reports. The structure for my work was found in the school's guidance documentation and enforced by the school's administrative staff.

How do we know we're doing science? Remember the NAS Report? The result of the NAS Report was the creation of the Organization of Scientific Area Committees for Forensic Science about 5 years ago. The OSAC has been hard at work refining guidelines and producing standards. Our discipline falls within the Video / Image Technology and Analysis (VITAL) Subcommittee. In terms of disclosure, I've been involved with the OSAC since its founding and currently serve as the Video Task Group Chair within VITAL. But, this isn't an official statement by/for them. Of course, it's me (as me) trying to be helpful, as usual. :)

Last year, an OSAC group issued a new definition of forensic science that can be used for all forensic science disciplines. Here it is:

Forensic science is the systematic and coherent study of traces to address questions of authentication, identification, classification, reconstruction, and evaluation for a legal context. Source: A Framework to Harmonize Forensic Science Practices and Digital/Multimedia Evidence. OSAC Task Group on Digital/Multimedia Science. 2017

What is a trace? A trace is any modification, subsequently observable, resulting from an event. You walk within the view of a CCTV system, you leave a trace of your presence within that system.

Thus it is that we're engaged in science. Should we not structure our reports in the same way, using the available guidance as to how they should look? Of course. But what would that look like?

Let's assume that your report has a masthead / letterhead with your/your agency's name and contact information. Here's the structure of a report that (properly completed) will conform to the ASTM standards and the world of experimental science.

Administrative Information
     Examiner Information
     Requestor Information
     Unique Evidence Control Number(s)
     Chain of Custody Information
Summary of Request
     Service Requested (e.g. photogrammetry, authentication, change of format, etc.)
     Equipment List
     Experimental Design / Proposed Workflow
Limitations / Delimitations
     Delimitations of the Experiment
     Limitations in the Data
     Personnel Delimitations / Limitations
Amped FIVE Processing Report can be inserted here as it conforms to ASTM 2825-12(17).
Results / Summary
     Problems / Errors Encountered
     List of Output File(s) / Derivatives / Demonstratives
     Administrative Approval

It would generally conclude with a declaration and a signature. Something like this, perhaps:

I, __________, declare under penalty of perjury as provided in 28 U.S.C. §1746 that the foregoing is true and correct, that it is made based upon my own personal knowledge, and that I could testify to these facts if called as a witness.

Now, let's talk about the sections.

The Administrative section.

  • You're the examiner. If you have help, or someone helped you in your work, they should be listed too. Co-workers, subcontractors, etc.
  • The requestor is the case agent, investigator, or the client. The person who asked you to do the work.
  • Every item of evidence must have a unique identifier.
  • Every item received must be controlled and its chain of custody tracked. If others accessed the item, their names would be in the evidence control report / list. DEMS and cloud storage solutions can easily do this and produce a report.
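The unique-identifier and chain-of-custody requirements above can be modeled quite simply. A minimal sketch - the class and field names here are my own illustration, not drawn from any standard or DEMS product:

```python
import hashlib
from datetime import datetime, timezone

class EvidenceItem:
    """An evidence item with a unique control number, a content hash,
    and a running chain-of-custody log."""
    def __init__(self, control_number, data: bytes):
        self.control_number = control_number
        self.sha256 = hashlib.sha256(data).hexdigest()
        self.custody_log = []

    def log_access(self, person, action):
        """Record who touched the item, when, and why."""
        self.custody_log.append((datetime.now(timezone.utc).isoformat(), person, action))

    def verify(self, data: bytes) -> bool:
        """Confirm the item on hand is bit-for-bit what was received."""
        return hashlib.sha256(data).hexdigest() == self.sha256

item = EvidenceItem("2018-00123-V1", b"...recovered .avi bytes...")
item.log_access("J. Examiner", "received from requestor")
print(item.verify(b"...recovered .avi bytes..."))  # True
```

The hash answers "is this still the same file?"; the log answers "who has handled it?" - the two questions a custody challenge will raise.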
Summary of Request
  • State what you were asked to do, in plain terms. For example, "Given evidence item #XXX, for date/time/camera, I was asked to determine the vehicle's make/model/year" - comparative analysis / content analysis. Or, "Given evidence item #XXX, for date/time/camera, I was asked to estimate the unknown subject's height" - photogrammetry. Or, "Given image evidence item #XXY-X, retrieved from evidence item #XXY (see attached report), I was asked to determine if the image's contextual information had been altered" - authentication.
  • Provide an abstract of the test and the results - a brief overview of what was done and what the results were (with references to appropriate page numbers). 

Equipment List / Experimental Design

  • What tools did you use - hardware / software? You may want to include a statement as to each and their purpose / fitness for that purpose. As an example, I use Amped FIVE. Amped FIVE is fit for the purpose of conducting image science experiments as it is operationalized from peer-reviewed / published image science. Its processing reports include the source documentation.
  • Your proposed workflow. What will guide your work? Can you document it easily? Does your processing report follow this methodology? Hint, it should. My workflow for Photogrammetry, Content Analysis, and Comparative Analysis can be found in my book, Forensic Photoshop. It's what I use when I work as an analyst. It's what I teach.

Limitations / Delimitations

  • Delimitations are the bounds within which your work will be conducted. I will test the image. I won't test the device that created the image.
  • With DME, there are a ton of limitations in the data. If the tested question is "what is on the license plate," and a macroblock analysis determines that there is no original data in the area of the license plate, then that is a limitation. If the tested question is "what is the speed of the vehicle," and you don't have access to the DVR, then that is a huge limitation. Limitations must be stated.
  • Personnel issues should also be listed. Did someone else start the work that you completed? Was another person employed on the case for a specific reason? Did something limit their involvement? If the question involves the need to measure camera height at a scene, and you can't climb a ladder so you mitigated that in some way, list it. 
A side note here ... did you reach out to someone for help? Someone like the DVR's technician, or your analysis tool manufacturer's support staff? Did they assist you? Make sure that you list their involvement. Did you send out a copy of the evidence to someone? If yes, is it within your agency's policy to release a copy of the evidence in the way that you've done so for the case? As an example, you send a still image of a vehicle to the FVA list asking for help. You receive a ton of advice that helps you form your conclusion, or helps the investigation. Did you note in your report that you interacted with the list, and who helped? Did you provide a copy of the correspondence in the report package? Did you provide all of the responses, or just the ones that support your conclusion? The ones that don't support your eventual conclusion should be included, with an explanation as to why you disagree. They're potentially exculpatory, and they should be addressed.

Remember, on cross examination, attorneys rarely ask questions of people blindly. They likely already know the answer and are walking you down a very specific path to a very specific desired conclusion. Whilst an attorney might not subpoena Verint's tech support staff / communications, as an example, they may have access to the FVA list and may be aware of your communications about the case there. You may not have listed that you received help from that source, but the opposing counsel might know of it. You won't know who's watching what source. They may ask if you've received help on the case. How would you answer if you didn't list the help and disclose the communications - all of the communications? If your agency's policy prohibits the release of case related info, and you shared case related info on the FVA list, your answer to the question now involves specific jeopardy for your career. I've been assigned to Internal Affairs, I've been an employee rep, I know how the system works when one has been accused of misconduct. How do you avoid the jeopardy? Follow your agency's policies and keep good records of your case activity.

Processing Steps

  • These are the steps performed and the settings used. This section should read like a recipe so that some other person with similar training / equipment can reproduce your work. This is the essence of Section 4 of ASTM 2825. Amped FIVE Processing Report can be inserted here as it conforms to ASTM 2825-12(17). 

Results / Summary

  • Did you encounter any problems or errors? List them.
  • How did you validate your results? Did anyone peer review your work? This can include test/retest or other such validity exams.
  • Conclusions - your opinion goes here. This is the result of your test / experiment / analysis.
  • List of Output File(s) / Derivatives / Demonstratives

Signatures / Approvals

  • Examiner (your name here), along with anyone else whose work is included in the report.
  • Reviewer(s) - was your completed work reviewed? Their name(s).
  • Administrative Approval - did a supervisor approve of the completed exam?

Do your reports look like this? Does the report from opposing counsel's analyst look like this? If not, why not? It may be an avenue to explore on cross examination. It's best to be prepared.

I know that this is a rather long post. But, I wanted to be rather comprehensive in presenting the topic and list the sources for the information listed. Hopefully, this proves helpful.


Friday, December 15, 2017

Sample sizes and determinations of match

It's been a busy fall season, traveling the country and training a whole bunch of folks. Over lunch, the group I was with asked me about a case that's been in the news and wondered if we'd be discussing how to conduct a comparison of headlight spread patterns. That led us down quite the rabbit hole ...

Comparative analysis assumes a "known" and compares it to an "unknown." It's important to consider time & temporality - that one can only TEST in the present - in the "now." For the future / past, one can only PREDICT what happened "then." Testing and Prediction have their own rules.

Take the testing of an image / video of a headlight spread pattern. One attempts to compare the "known" vs. a range of possible "unknowns." Our lunch group mentioned a case where the examiner tested about a dozen trucks in front of the CCTV system that generated the evidentiary video, in addition to the vehicle in evidence, to try to make a determination. The examiner did, in fact, determine a match, as the report indicated.

The question really isn't the appropriateness of the visual comparison. The question is the appropriateness of the sample size such that the results can be useful / trusted. How did the examiner determine an appropriate sample size? Is a dozen trucks appropriate?

Individual headlight direction can be adjusted. Headlights come in pairs. Thus, there are two variables that are not on/off. In the world of statistics, they're continuous variables. You're testing two continuous variables against a population of continuous variables to determine uniqueness. Is this possible in real life? What's the appropriate sample size for such a test?

I use a tool called G*Power to calculate sample size. Just about every PhD student does. It's free and quite easy to use once you learn to speak its language. Most, like me, learn its language in graduate level stats classes.

For example, if you've determined that an F-Test of Variance - Test of Equality is the appropriate statistical test needed to conduct your experiment, then select that test using G*Power.

Press the Calculate button, and G*Power calculates the appropriate sample size. In this case, the appropriate sample size is 266. There's a huge difference between 266 and a dozen. You can plot the results to track the increase in sample size relative to Power. If you want greater confidence in your results (Power), you need a larger sample size.
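For those without G*Power handy, the arithmetic behind a sample size calculation can be reproduced directly. A sketch using the standard normal-approximation formula for a two-sample test of means - the effect size, alpha, and power values below are conventional illustrative defaults, not the inputs behind the 266 figure above:

```python
import math
from statistics import NormalDist

def two_sample_n(effect_size, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample test of means
    (normal approximation; exact t-based tools like G*Power run slightly higher)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided test
    z_beta = z.inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" standardized effect of 0.5 at the conventional 80% power:
print(two_sample_n(0.5))  # 63
```

Notice how the answer grows as you demand more Power: raising power from 0.80 to 0.90 pushes the per-group requirement from 63 to 85 - exactly the sample-size-vs-Power relationship the plot illustrates.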

The examiner's report should include a section about how the sample size was determined and why the test used to calculate it was appropriate. It should include graphics to illustrate the results.

It's vitally important that when conducting a comparative exam and declaring a "match," the examiner understands the necessary science behind that conclusion. "Match" usually does not mean "a Nissan Sentra." That's not helpful given the quantity of Nissan Sentras in a given region. "Match" means "this specific Nissan Sentra." Isn't the standard, "Of all the Nissan Sentras made in that model year, whithersoever dispersed around the globe, it's only this particular one and no other?"

What about the test? Did you choose the appropriate test?

What if, on the other hand, you determined that the appropriate test is the Wilcoxon signed-rank test (a nonparametric counterpart to the t-test)? Then the sample size would be different. With that test, the appropriate sample size would be 47. That's still not a dozen.

What happens if you like the T-test and opposing counsel's examiner likes the F-test? What happens when two examiners disagree? Do you have the education, training, and confidence to defend your choice and your results in a Daubert hearing?

Perhaps you've been trained in the basics of conducting a comparative examination. But have you been trained / educated in the science of conducting experiments? Do you know how to choose the appropriate tests for your questions? Do you know how to structure your experiment? Do you know how to calculate the appropriate sample size for your tests?
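To make the "choosing the appropriate test" point concrete, here is a small, purely illustrative SciPy example that runs an equality-of-variances check on two synthetic measurement sets. Levene's test is a common, robust choice when normality is doubtful; the data below are simulated and the sample size of 266 simply echoes the earlier calculation.

```python
# Illustrative only: an equality-of-variances test on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample_a = rng.normal(loc=100.0, scale=5.0, size=266)  # simulated measurements
sample_b = rng.normal(loc=100.0, scale=5.0, size=266)

# Levene's test: null hypothesis is that the variances are equal.
stat, p_value = stats.levene(sample_a, sample_b)
print(stat, p_value)  # a large p-value gives no evidence of unequal variances
```

Whether Levene's test, a classical F-test, or something else is appropriate depends on your data and your question, which is exactly why the education matters.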

To wrap up, when concluding that a particular vehicle can't be any other because you've compared the headlight spread pattern in a video to several vehicles of the same model / year, it's vitally important to justify the sample size of comparators. You must choose the appropriate test and calculate the sample size based on that test. ASTM 2825-12's requirement that one must produce a report such that another similarly trained / equipped person can reproduce your work means that you must include your notes on the calculation of the sample size. If you haven't done this, you're just guessing and hoping for the best.

Friday, October 27, 2017

Forensically sound

"Is it forensically sound?"

I've heard this question asked many times since I began working in forensic analysis many years ago. Me being me, I wanted to know what it meant to be "forensically sound." Here's what I found as I took a journey through the English language.

"Forensically." The root of this is "forensic."

The root language for the English word "forensic" is the Latin "forensis." It means a public space, a market, or in open court.

Forensis means "of or pertaining to the market or forum." Another way of looking at this can be, activities that happen in the market, forum, public space, or open court.

Ok. We've got "forensically" down. What about "sound"?

Sound, from the Old English, means that which is based on reason, sense, or judgement, and/or that which is competent, reliable, or holding acceptable views.

Put together, and given the context of our work, "forensically sound" can mean that activity, related to work for the court / public, which is well founded, reliable, and logical - which is based on reason, sense, or good judgement.

Great, we've now got a working definition. Now how does it apply to our efforts?

In the US, the Judge acts as the "gatekeeper." In providing this "gatekeeper" function, the Judge should weigh the foundation and reliability of the evidence being submitted in the particular case. When questions arise as to science, validity, and/or reliability, either party can ask the Judge for a hearing on the evidence and explore these issues (i.e. Daubert Hearing).

One of the ways that Judges evaluate the work is by comparing the work product to known standards. In our discipline, we can find standards at the ASTM. For image / video processing, the standard is ASTM 2825. Taking a step back, standards are "must do" and guidelines are "may do."

Thus, if you've followed ASTM 2825 (meaning your work can be repeated), and you use valid and reliable tools, your work is "forensically sound." It's a two part evaluation - you and your tools.

Did you work in a valid, reliable, repeatable, and reproducible way? Are your tools valid and reliable? If the answer is yes to all of these, then your work is forensically sound.

In the many times that I've been asked to evaluate another person's work (i.e. from opposing counsel), this is the standard with which I work. It forms a checklist of sorts.

  • Do I have the same evidence as the opposing side? (i.e. true/exact copy)
  • Is there a written report that conforms to ASTM 2825-12? This assures that I can attempt the tests and thus attempt to reproduce their results. 
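The first checklist item, confirming a true/exact copy of the evidence, is typically verified by comparing cryptographic hashes. A minimal sketch, with hypothetical file names:

```python
# Minimal sketch: verify a true/exact copy by comparing SHA-256 digests.
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage: compare the copy received in discovery to the
# original export. Matching digests indicate a bit-for-bit identical copy.
# identical = sha256_of("evidence_received.dvr") == sha256_of("original_export.dvr")
```

If the digests match, you're working from the same bits as the other side; if not, any difference in results may stem from the evidence itself rather than the analysis.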
That's really it for me. Others may concentrate on training and education and certifications. I really don't. If they aren't trained / educated, it will show in their reporting. To be sure, there are avenues to explore if you have the other person's CV (verify memberships, certifications, education etc.). But, I would hope that folks wouldn't embellish their CV. It's so easy to fact check these days, why lie about something that can be easily discovered via Google?

You have a copy of the evidence and the opposing counsel's report. You attempt to reproduce the results. Two things can happen.
  1. You successfully reproduce the results and come to the same conclusion.
  2. Your results differ from that of the opposing counsel.
If the answer is #1, you're finished. Complete your report and move on. If the answer is #2, can you explain the discrepancy? Your report may include your conclusions as to what went wrong on the other side and why yours is the correct answer.

I hope this helps...