Sunday, August 12, 2018

Scientists as Stoics?

I have made no secret of my academic pursuits. I have been an educator in the classic liberal arts and sciences for some time now. I am of the firm belief that the classics inform every aspect of adult life. I'm one of the few out there who believe that people should not hyper-specialize in their educational pursuits, but should have a broad knowledge set. Save the specialization for doctoral / post-doctoral work.

I have also made no secret of my athletic pursuits. The Russian martial art of Sambo has within it a provision of rank that factors not only your competition appearances and wins, but also your reach as a coach. How many people have you assisted in their path to success? The Russian martial art of Systema grounds one in an ethical foundation that effortlessly considers the consequences of action / non-action in everything. This mindfulness becomes a part of taking every breath. To achieve its goals, Systema seeks to remove the practitioner from the attention of a potential threat, rather than boastfully seeking every violent encounter.

My love of the many martial systems that I have studied and trained in informs my work as a forensic scientist, as does my love of the classics and the pursuit of knowledge.

It's with this in mind that I share this post with you today. I've spent a lot of time traveling this summer. I've been criss-crossing the country spreading the good news of science. I've also been stuck in airports and on the tarmac enduring endless delays. Thankfully, I have a Kindle and can engage in one of my other favorite pursuits, reading.



I came across William Ferraiolo, PhD, and his book via a friend on social media. As someone who teaches and lectures on philosophy, religion, and politics, I'm always looking for fresh insight on the classics. Meditations on Self-Discipline and Failure is just that.

It was quite refreshing to read this book, especially in light of the current social media driven culture. Everyone on LinkedIn is an "expert." Everyone on Instagram is a cultural influencer. Everyone on Facebook is having a great time eating every meal at some amazing destination. Real life, I'm afraid, isn't at all like that. I think that so many of the problems that our western culture is facing are due in large part to a loss of our connection with our history. Without a grounding in the classics, without the ability to utilize logic and reason, judging one's own life against what one sees on YouTube will not end well. Sadly, so many seek solace in a bottle or a pill when their life doesn't measure up to what they see on the screen. Tragically, many willingly choose to end their life for similar, trivial reasons. As long as one draws breath, there's always a chance of turning things around for the better. Nothing is ever truly hopeless.

I share this tragic fact with my forensic science students: all of the people whom I have known, and who willingly chose to end their lives, have been employed in the forensic sciences. Six of them. That's six too many. I share it with them ahead of informing them of the many ways that they can mitigate the vicarious trauma associated with working in this world - ways that don't include a nightly bottle of Gin.

The totality of my life informs my work in the forensic sciences. My knowledge and absorption of stoicism guides my work, reporting style, and testimonial delivery. It also helps me deal with the vile filth and foul of the criminal world. It's not about me, it's about the case, the evidence, and the facts. The case is not about me, and I do not make it so. I do not personalize the case. I do not absorb it - "I worked the ... case." I am a practitioner assisting the Trier of Fact, nothing more. It's about the results, grounded in science and the scientific method. I think others in the sciences would benefit from this approach.

All this being said, I believe Dr. Ferraiolo's Meditations on Self-Discipline and Failure: Stoic Exercise for Mental Fitness to be a worthwhile read. Here's a quote that fits nicely within this discussion, as well as serving as commentary on recent events.

Do not become overly enamored with yourself, your abilities, or your paltry status. You are, in the grand scheme of things, a trifling, ephemeral phenomenon of little consequence. You are slightly smarter than an ape or a dolphin. If there is a Creator who has endowed you with any special status, recognize that this is a gift and not an accomplishment in which you may rightfully take pride. No one earns birth as a member of the reasoning species, or any privileges pertaining thereto. If the matter is entirely propitious, you have still less warrant for a swollen ego. Note your good fortune, but do not claim to be intrinsically good, due to a chance concatenation of molecules. Set about the business of trying to understand your place in this vast cosmos, your duties as a human being, and a method and practice leading to enlightenment—or the closest approximation you can manage. (p. 23)

On science as science, not consensus or mob rule:

Do not be swayed by the mere opinion of the masses or the majority. The truth is not determined by plebiscite. (p. 46)

On earning respect:

Do not pretend to respect other persons either more or less than you actually do respect them. You owe no one a pretense of deference, and you owe everyone the deference that they have, by your own lights, earned. You should have nothing to do with sham collegiality or faux civility. Some persons are worthy of your contempt, and their behavior, as well as other outward indications of their character, is sufficient grounds for reasonable (though not perfectly reliable) assessment of their merit. If anyone demands that you “try to get along” with any person that you do not respect, then you have grounds for reconsidering your relations with the former individual (the one issuing the demand). Do not allow yourself to be pressed, bullied, or cajoled into relations that strike you as unhealthy or pointless. (p. 9)

The book is simultaneously easily digested and incredibly disturbing. If one's goal is self-improvement, the work will always be a painful slog. No one likes to examine one's own shortcomings and failures. But it is a very necessary pursuit. You'll end up the better for it. This book can serve as a guide to get you started down that vital path of making your life worth living.

Every scientist should be a stoic. I believe stoicism to be an essential characteristic and a necessary defense against error and falsehood. Perhaps you don't agree. Perhaps you don't understand what I mean. If you'd like to know more, start with this book. You'll be glad that you did.


Wednesday, July 4, 2018

How would you know?

It's been a while since I last presented at a LEVA conference. This time, I'm going to be presenting a topic that features some rather interesting information for Forensic Multimedia Analysts.

In editing the Session Descriptions, LEVA's Training Coordinator has seen fit to pay a visit to my web page and lift a bit of information about my educational journey to add to the Speaker's biography that was submitted. That's fine. I'll play along. In this article, I'll illustrate what I've learned along the way to earning the degrees listed in my bio - learning that will also be the feature of my LEVA talk, introduced here.

Yes, like many in law enforcement (including at least one of my fellow presenters at the Conference), I have degrees in Organizational Leadership. This is a solid degree choice for anyone aspiring to leadership in their organization, public or private. The difference between a "management" degree, like an MBA, and a "leadership" degree like mine (BOL / MOL) is quite simple actually. Managers correct things that have gone wrong. Leaders help things go right in the first place. I happen to have received my degrees (BOL and MOL) from a 130+ year old brick-and-mortar business school. Earning a business degree from a long-established business school leaves you with an incredible foundation in business principles. So what? What does that have to do with Forensic Multimedia Analysis?

Here's the "so what" answer. Let's examine the business of DVR manufacturing from the standpoint of determining the DVR's purpose and if it fulfills its purpose. Attempting to identify purpose / fit for purpose of the parts in the recording chain is one of the elements of the Content Triage step in the processing workflow. Why did the device produce a recording of five white pixels in the area where you were expecting to see a license plate? Understanding purpose helps answer these "why" questions.

What is the purpose of a generic Chinese 4 channel DVR? The answer is not what you think.

For our test, we'll examine a generic Chinese 4 channel DVR, the kind found at any convenience store around the US. It captured a video of a crime and now you want to use its footage to answer questions about the events of that day. Can you trust it?

Take a DVR sold on Amazon or any big box retailer. There's the retail price, and there's the mark-up along the way to the retailer.


When you drill down through the distribution chain to the manufacturer, you find out something quite amazing, like this from Alibaba.com.


The average wholesale price of a 4 channel DVR made in China is $30 / unit. Units with more camera channels aren't much more. Units without megapixel recording capability are a bit less. This price is offered with the manufacturer's profit built in. Given that the wholesale price includes a minimum of 100% markup from cost, and that there are labor and fixed costs involved, the average Chinese DVR is simply a $7 box of parts. The composition of that box of parts is entirely dependent upon what's in the supply chain on the day the manufacturing order was placed. That day's run may feature encoding chips from multiple manufacturers, as an example. The manufacturer does not know which unit has chips from a particular supplier - and doesn't care as long as it "works."

What's the purpose of this DVR? The purpose has nothing to do with recording your event. The purpose is to make about $15 in profit for the manufacturer whilst spending about $15 on parts, labor, and overhead. Check again for 4 channel DVRs on Alibaba.com. There are more than 2,500 different manufacturers in China offering a variety of specs within this space ... all making money with their $7 box of parts.

Let's say the $7 of parts at your crime scene recorded your event at 4CIF. You are asked to make some determination that involves time. You'll want to know if you can trust your $7 box of parts to accurately record time. How would you know?

One of the more popular DVR brands out west is Samsung. But, Samsung doesn't exist as such anymore. Samsung Techwin (Samsung's CCTV business unit) was sold to Hanwha Group a few years ago and is now sold as Hanwha Techwin (Samsung Techwin) in the US. Where does Hanwha get its $7 worth of parts within the supply chain? China, for the most part. Chinese suppliers can make DVR parts a lot cheaper than their Korean counterparts.

Here are the specs from a Hanwha Techwin HRD-440.


This model, recording at 4CIF, for example, can record UP TO 120fps across all of its channels. UP TO means its max potential recording rate. It does not mean its ACTUAL recording rate at the time of the event in question. The "up to" language is placed there to protect the manufacturer of this $7 box of parts against performance claims. If it were a Swiss chronometer, it wouldn't need the disclaiming language. But, it's not a Swiss chronometer - it's a $7 box of parts.

What does the recording performance of the channel in question in the specific evidentiary DVR look like when it alone is under load (maximum potential recording rate)? What about the recording performance of the channel in question (at max) when the other channels move in and out of their own maximum potential recording rate? What happens within the system when all channels are at the max? Remember also that systems like these allow for non-event recording to happen at lower resolutions than event recording (alarm / motion). How does the system respond when a channel or all channels are switching resolutions up / down? How does what's happening internally compare with the files that are output to .avi or .sec files? How do these compare to data that's retrieved and processed via direct acquisition of the hard drive?
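If you want to turn those questions into a testing plan, it helps to enumerate every combination of conditions you intend to exercise before you press record. Here's a minimal Python sketch, assuming hypothetical variable names and condition lists - swap in the settings your evidentiary DVR actually supports:

```python
from itertools import product

# Hypothetical test variables for a generic 4 channel DVR performance model.
active_channels = [1, 2, 3, 4]                # how many channels are under load
resolutions = ["CIF", "2CIF", "4CIF"]         # per-channel recording resolution
triggers = ["continuous", "motion", "alarm"]  # non-event vs. event recording

test_matrix = list(product(active_channels, resolutions, triggers))

print(f"{len(test_matrix)} unique test conditions")
for channels, resolution, trigger in test_matrix:
    print(f"channels under load: {channels}, resolution: {resolution}, trigger: {trigger}")
```

Each of those conditions then gets repeated enough times to satisfy your sample size calculation (more on that below) before you can claim to have modeled the device's behavior.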

How would you know? You would build a performance model. How would you do that if you have no experience? I'll introduce you to experimental science in San Antonio - at the LEVA conference. Experimental science is the realm of anyone with a PhD, regardless of the discipline (this is where Arizona v Romero comes into play). If you think the LEVA Certification Board is a tough group, try defending a dissertation.

Why a PhD in Education, you might ask. Three reasons. There are no PhDs in Forensic Multimedia Analysis for one. The second reason, and the subject of my dissertation, deals with the environment on campus and in the classroom that causes such a great number of otherwise well qualified people to arrive on campus and suddenly and voluntarily quit (withdraw). The results of my research can be applied to help colleges configure their classes and their curriculum, as well as to train professors to accommodate a diverse range of students - including mature adults with a wealth of knowledge who arrive in class with fully formed and sincerely held opinions. The third reason has to do with a charity that I founded a few years ago to help bring STEM educational help to an underserved community and population of learners in the mountain communities of northern Los Angeles / southern Kern counties in California.

Imagine that you've been told by your chain of command that you must have a certain level of education to promote at your agency. That's what happened to me. I was minding my own business with an AS in Political Science that I cobbled together after my college football career, such as it was, crashed and burned after injury. I later found myself in police service when these new rules were instituted. But, thankfully, our local Sheriff had approached the local schools promising butts in seats if they'd only reduce their tuition. So I finished my Bachelors degree at an esteemed B-school for $7k and stayed there for a MOL for only $9k. The PhD path wasn't cheap, but it was significantly cheaper than it would have been without the Sheriff's office's help. As to why I chose to go all the way to PhD, that was the level of education necessary to make more pensionable money had I decided to switch from being a technician making more than half-again my salary in overtime (which isn't pensionable, sadly) to management. But, I digress. Back to work, Jim.

Sparing you the lecture on time and temporality here, the basic tenet of experimental science is that you can only measure "now." If you want to know what happened / will happen, you need to build a model. Meteorologists build a model of future environmental patterns to forecast the weather for next week. They don't measure next week's weather properties today. The same holds true across the sciences. Moneyball was a quant's attempt to model behavior in order to achieve a future advantage in sports.

When modeling performance, it's important to use valid tools and to control for all variables (as best as possible). At a minimum, it's important to know how your tools are working and how to not only interpret the results produced but to spot issues of concern within the results.

As an example, pretty much everyone in this space is familiar with FFMPEG and its various parts. Let's say that you use the command line version to analyze the stream and container of the .avi file from our example DVR (it's all you have to work with). It's an NTSC DVR and the results from your analysis tool indicate a frames per second (fps) of 25. Is this correct? Would you necessarily expect 25fps from an NTSC DVR? Is this FFMPEG's default when there's no fps information in the file (it's a European tool, after all)? Does total frames / time = 25fps? If yes, you're fine. If not, what do you do? You test.
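One way to answer the "does total frames / time = 25fps" question without trusting the container's header is to have the tool decode and count the frames, then compare the measured rate against the reported rate. Here's a minimal sketch using ffprobe (part of the FFmpeg project) from Python; the file name is hypothetical, and some containers report duration at the format level rather than the stream level, so treat this as a starting point rather than a validated procedure:

```python
import json
import subprocess

def check_frame_rate(path):
    """Decode the first video stream, count its frames, and compare the
    measured rate against the rate the container claims."""
    cmd = [
        "ffprobe", "-v", "error",
        "-count_frames",                     # actually decode and count every frame
        "-select_streams", "v:0",
        "-show_entries", "stream=nb_read_frames,duration,r_frame_rate",
        "-of", "json", path,
    ]
    stream = json.loads(subprocess.check_output(cmd))["streams"][0]

    frames = int(stream["nb_read_frames"])
    duration = float(stream["duration"])     # seconds, as reported by the stream
    reported = stream["r_frame_rate"]        # e.g. "25/1" - the claimed rate

    print(f"reported rate:  {reported}")
    print(f"counted frames: {frames} over {duration:.2f} s")
    print(f"measured rate:  {frames / duration:.2f} fps")

check_frame_rate("evidence_ch03.avi")        # hypothetical evidentiary file
```

If the measured rate and the reported rate disagree, that disagreement is itself a finding - and a reason to keep testing against the device.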

Is your single evidentiary file (sample size = 1) sufficient to generalize the performance of your $7 box of parts? Of course not. In order to know how many samples are needed to generalize the results across the population of files from this specific DVR, you need to test - to build a performance model. How many unique tests will gain you the appropriate number of samples from which to build your model? Well, that depends on the question, the variables, and the analyst's tolerance for error ... and that's the focus of my talk at the LEVA conference.
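To give a flavor of what that calculation looks like, here's a minimal sketch of a textbook sample size formula for estimating a proportion (normal approximation). The confidence level, margin of error, and expected proportion below are illustrative defaults, not values from my talk:

```python
import math
from scipy.stats import norm

def sample_size(confidence=0.99, margin_of_error=0.05, expected_p=0.5):
    """Samples needed to estimate a proportion to within the margin of error.
    expected_p=0.5 is the most conservative (largest n) assumption."""
    z = norm.ppf(1 - (1 - confidence) / 2)   # two-tailed critical value
    n = (z ** 2) * expected_p * (1 - expected_p) / margin_of_error ** 2
    return math.ceil(n)

print(sample_size())                    # 99% confidence, 5% margin -> 664 samples
print(sample_size(confidence=0.95))     # 95% confidence, 5% margin -> 385 samples
```

Different examinations call for different formulas - means versus proportions, finite populations, paired designs - which is exactly why the question, the variables, and the error tolerance have to come first.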

The information from my workshop plugs in rather nicely with many of the other presentations on offer at the Conference. There's a rather synergistic menu from which to choose this year. Many presentations will feature how-to's of different techniques. Mine will show you how to identify the variables within those exercises, as well as how many repetitions of the tests will be needed at a minimum to validate your attempts at these new techniques.

I hope to see you there. :)

Tuesday, July 3, 2018

LEVA 2018 Conference - corrections

It's time to start planning for the next LEVA Conference. This time, the tour stops in San Antonio, TX.

The schedule's out and it looks like I'll be presenting on the morning of Wednesday, November 7, 2018. I'll be presenting my latest paper entitled Sample Size Calculation for Forensic Multimedia Analysis: the quantitative foundations of experimental science.

Abstract: The 2009 National Academy of Sciences report, Strengthening Forensic Science in the United States – A Path Forward, outlined specific structural deficits in the practice of forensic science in the US. A few years later, the Organization of Scientific Area Committees for Forensic Science (OSAC) was created within the US Department of Commerce (NIST) to address the issues raised and to publish standards in all of the recognized disciplines. Forensic Multimedia Analysis falls within the scope of the Digital / Multimedia Area Committee. In 2017, in an attempt to harmonize the various definitions of “forensic science,” the OSAC’s Task Group on Digital/Multimedia Science produced the following consensus definition, “Forensic science is the systematic and coherent study of traces to address questions of authentication, identification, classification, reconstruction, and evaluation for a legal context.” In clarifying the definition, they noted, “[a] trace is any modification, subsequently observable, resulting from an event.” An impression left behind is certainly “a trace,” as are biological materials; but so is the recording of a person or a thing - a trace of their presence at a scene.

In harmonizing practices across the comparative sciences, it has been recommended that all involved in the work have some familiarity with quantitative analysis and experimental science. This is evidenced in a recent Arizona Supreme Court case, Az. v Romero. In presenting this paper, “Sample Size Calculation for Forensic Multimedia Analysis: the quantitative foundations of experimental science,” I will introduce the science of quantitative analysis in general and sample size calculations in particular as they relate to three common examinations performed by forensic multimedia analysts. Attendees will learn the basics of experimental science and quantitative analysis as well as detailed information on the calculation of the sample sizes necessary for many analytical experiments. The quantitative underpinnings of “blind” image authentication, forensic photographic comparison, and speed calculations from DME evidence will be presented and explored.

How many samples would you need for a 99% confidence in your conclusions that result from a “blind” image authentication exam? Hint: the answer isn’t 1 (the evidence image). Depending on the examination, and the evidence type, the number of samples varies. In this module, you will learn how to determine the appropriate number of samples for a particular exam as well as how to explain and defend your results.

---

My reason for this post? Why post the complete abstract here? It was edited in the Session Descriptions on the LEVA web site, removing some vital information and shifting the context a bit. Also, there were misstatements made in my bio below the Session Description that incorrectly listed the duration of my employment at the LAPD as well as naming me the "founder" of the multimedia lab there. I'm posting the complete description as well as my professional biography to correct the record, in case a correction isn't made to the LEVA site.

---

Jim Hoerricks' Professional Biography:

Jim Hoerricks, PhD, is the Director of Customer Support and Training (North America) at Amped Software, Inc.

Previously, Jim was the Senior Forensic Multimedia Analyst for the Los Angeles Police Department. Jim co-founded the LAPD’s forensic multimedia laboratory in 2002 and helped set the standard for its handling of this unique type of evidence.

Jim is the author of the best-selling book, Forensic Photoshop, and a co-author of Best Practices for the Retrieval of Video Evidence from Digital CCTV Systems (DCCTV Guide). Jim also serves on the Organization of Scientific Area Committees for Forensic Science’s (OSAC) Video/Imaging Technology and Analysis (VITAL) subcommittee as the Video Task Group Chair.

---

Now, that's sorted. See you in November in San Antonio.

Friday, May 11, 2018

Report writing in forensic multimedia analysis

You've analyzed evidence. You've made a few notes along the way. You've turned those notes over to the process. Your agency doesn't have a specific requirement about what should be in your notes or your report or how detailed they should be. In all the cases that you've worked, you've never been asked for specifics / details.

Now, your case has gone to trial. An attorney is seeking to qualify you to provide expert (opinion) testimony. They introduce you, your qualifications, and what you've been asked to do. The judge may or may not declare you to be an expert so that your opinion can be heard.

As a brief aside, your title or job description can vary widely. I've been an analyst, specialist, director, etc. FRE Rule 702, and the similar rule in your state's evidence code, governs your testimonial experience. Here's the bottom line: according to evidence code, you're not an "expert" unless the Judge says so, and then only for the duration of your testimony in that case. After you're dismissed, you go back to being an analyst, specialist, etc. You may have specific expertise, and that's great. But the assignment of the title of "expert" as relates to this work is generally done by the judge in a specific case, related to the type of testimony that will be offered.

A technician generally offers testimony about a procedure and the results of the procedure. No opinion is given. "I pushed the button and the DVR produced these files."

An expert generally offers opinion based testimony about the results of an experiment or test. "I've conducted a measurement experiment and in my opinion, the unknown subject in the video at the aforementioned date/time is 6’2” tall, with an error of ..."

Everything's OK ... until it's not. You've been qualified as an expert. Is your report ready for trial? What should be in a report anyway?

First off, there are two types of guidance in answering this question. The first type, people's experiences, might help. But, then again, it might not. Just because someone got away with it, doesn't make it a standard practice. Just because you've been through a few trials doesn't make your way "court qualified." These are marketing gimmicks, not standard practices. The second type, a Standard Practice, comes from a standards body like the ASTM. As opposed to the SWGs, who produce guidelines (it would be nice if you ...), standards producing bodies like the ASTM produce standards (you must/shall). For the discipline of Forensic Multimedia Analysis, there are quite a few standards which govern our work. Here are a few of the more important ones:

  • E860-07. Standard Practice for Examining And Preparing Items That Are Or May Become Involved In Criminal or Civil Litigation
  • E1188-11. Standard Practice for Collection and Preservation of Information and Physical Items by a Technical Investigator
  • E1459-13. Standard Guide for Physical Evidence Labeling and Related Documentation
  • E1492-11. Standard Practice for Receiving, Documenting, Storing, and Retrieving Evidence in a Forensic Science Laboratory
  • E2825-12(17). Standard Guide for Forensic Digital Image Processing

Did your retrieval follow E1188-11? Did your preparation of the evidence items follow E860-07? Did you assign a unique identifier to each evidence item and label it according to E1459-13? Does your workplace handle evidence according to E1492-11? Did your work on the evidence items follow E2825-12?

If you're not even aware of these standards, how will you answer the questions under direct / cross examination?

Taking a slight step back, and adding more complexity, you're engaged in a forensic science discipline. You're doing science. Science has rules and requirements as well. A scientist's report, in general, is structured in the same way. Go search scientific reports and papers in Google Scholar or ProQuest. The contents and structure of the reports you'll find are governed by the accredited institution. I've spent the last 8 years working in the world of experimental science, conducting experiments, testing data, forming conclusions, and writing reports. The structure for my work was found in the school's guidance documentation and enforced by the school's administrative staff.

How do we know we're doing science? Remember the NAS Report? The result of the NAS Report was the creation of the Organization of Scientific Area Committees for Forensic Science about 5 years ago. The OSAC has been hard at work refining guidelines and producing standards. Our discipline falls within the Video / Image Technology and Analysis (VITAL) Subcommittee. In terms of disclosure, I've been involved with the OSAC since its founding and currently serve as the Video Task Group Chair within VITAL. But, this isn't an official statement by/for them. Of course, it's me (as me) trying to be helpful, as usual. :)

Last year, an OSAC group issued a new definition of forensic science that can be used for all forensic science disciplines. Here it is:

Forensic science is the systematic and coherent study of traces to address questions of authentication, identification, classification, reconstruction, and evaluation for a legal context. Source: A Framework to Harmonize Forensic Science Practices and Digital/Multimedia Evidence. OSAC Task Group on Digital/Multimedia Science. 2017

What is a trace? A trace is any modification, subsequently observable, resulting from an event. You walk within the view of a CCTV system, you leave a trace of your presence within that system.

Thus it is that we're engaged in science. Should we not structure our reports in the same way, using the available guidance as to how they should look? Of course. But what would that look like?

Let's assume that your report has a masthead / letterhead with your/your agency's name and contact information. Here's the structure of a report that (properly completed) will conform to the ASTM standards and the world of experimental science.

Administrative Information
     Examiner Information
     Requestor Information
     Unique Evidence Control Number(s)
     Chain of Custody Information
Summary of Request
     Service Requested (e.g. photogrammetry, authentication, change of format, etc.)
Methodology
     Equipment List
     Experimental Design / Proposed Workflow
Limitations / Delimitations
     Delimitations of the Experiment
     Limitations in the Data
     Personnel Delimitations / Limitations
Processing
     Amped FIVE Processing Report (can be inserted here as it conforms to ASTM 2825-12(17))
Results / Summary
     Problems / Errors Encountered
     Validation
     Conclusions
     List of Output File(s) / Derivatives / Demonstratives
Approval(s)
     Examiner
     Reviewer
     Administrative Approval

It would generally conclude with a declaration and a signature. Something like this, perhaps:

I, __________, declare under penalty of perjury as provided in 28 U.S.C. §1746 that the foregoing is true and correct, that it is made based upon my own personal knowledge, and that I could testify to these facts if called as a witness.

Now, let's talk about the sections.

The Administrative section.

  • You're the examiner. If you have help, or someone helped you in your work, they should be listed too. Co-workers, subcontractors, etc.
  • The requestor is the case agent, investigator, or the client. The person who asked you to do the work.
  • Every item of evidence must have a unique identifier.
  • Every item received must be controlled and its chain of custody tracked. If others accessed the item, their names would be in the evidence control report / list. DEMS and cloud storage solutions like Evidence.com can easily do this and produce a report.

Summary of Request
  • What was it that you were asked to do, in plain terms. For example, "Given evidence item #XXX, for date/time/camera, I was asked to determine the vehicle's make/model/year" - comparative analysis / content analysis. Or, "Given evidence item #XXX, for date/time/camera, I was asked to estimate the unknown subject's height" - photogrammetry. Or, "Given image evidence item #XXY-X, retrieved from evidence item #XXY (see attached report), I was asked to determine if the image's contextual information had been altered" - authentication.  
  • Provide an abstract of the test and the results - a brief overview of what was done and what the results were (with references to appropriate page numbers). 

Methodology

  • What tools did you use - hardware / software? You may want to include a statement as to each and their purpose / fitness for that purpose. As an example, I use Amped FIVE. Amped FIVE is fit for the purpose of conducting image science experiments as it is operationalized from peer-reviewed / published image science. Its processing reports include the source documentation.
  • Your proposed workflow. What will guide your work? Can you document it easily? Does your processing report follow this methodology? Hint, it should. Here's my workflow for Photogrammetry, Content Analysis, and Comparative Analysis. You can find it originally in my book, Forensic Photoshop. It's what I use when I work as an analyst. It's what I teach.


Limitations / Delimitations

  • Delimitations are the bounds within which your work will be conducted. I will test the image. I won't test the device that created the image.
  • With DME, there are a ton of limitations in the data. If the tested question is, what is the license plate, and a macro block analysis determines that there is no original data in the area of the license plate, then that is a limitation. If the tested question is, what is the speed of the vehicle, and you don't have access to the DVR, then that is a huge limitation. Limitations must be stated.
  • Personnel issues should also be listed. Did someone else start the work that you completed? Was another person employed on the case for a specific reason? Did something limit their involvement? If the question involves the need to measure camera height at a scene, and you can't climb a ladder so you mitigated that in some way, list it. 
A side note here ... did you reach out to someone for help? Someone like the DVR's technician or the manufacturer of your analysis tool's support staff? Did they assist you? Make sure that you list their involvement. Did you send out a copy of the evidence to someone? If yes, is it within your agency's policy to release a copy of the evidence in the way that you've done so for the case? As an example, you send a still image of a vehicle to the FVA list asking for help. You receive a ton of advice that helps you form your conclusion, or helps the investigation. Did you note in your report that you interacted with the list and who helped? Did you provide a copy of the correspondence in the report package? Did you provide all of the responses or just the ones that support your conclusion? The ones that don't support your eventual conclusion should be included, with an explanation as to why you disagree. They're potentially exculpatory, and they should be addressed.

Remember, on cross examination, attorneys rarely ask questions of people blindly. They likely already know the answer and are walking you down a very specific path to a very specific desired conclusion. Whilst an attorney might not subpoena Verint's tech support staff / communications, as an example, they may have access to the FVA list and may be aware of your communications about the case there. You may not have listed that you received help from that source, but the opposing counsel might. You won't know who's watching what source. They may ask if you've received help on the case. How would you answer if you didn't list the help and disclose the communications, all of the communications? If your agency's policy prohibits the release of case related info, and you shared case related info on the FVA list, your answer to the question now involves specific jeopardy for your career. I've been assigned to Internal Affairs, I've been an employee rep, I know how the system works when one has been accused of misconduct. How do you avoid the jeopardy? Follow your agency's policies and keep good records of your case activity.

Processing

  • These are the steps performed and the settings used. This section should read like a recipe so that some other person with similar training / equipment can reproduce your work. This is the essence of Section 4 of ASTM 2825. Amped FIVE Processing Report can be inserted here as it conforms to ASTM 2825-12(17). 

Results / Summary

  • Did you encounter any problems or errors? List them.
  • How did you validate your results? Did anyone peer review your work? This can include test/retest or other such validity exams.
  • Conclusions - your opinion goes here. This is the result of your test / experiment / analysis.
  • List of Output File(s) / Derivatives / Demonstratives

Approval(s)

  • Examiner (your name here), along with anyone else whose work is included in the report.
  • Reviewer(s) - was your completed work reviewed? Their name(s).
  • Administrative Approval - did a supervisor approve of the completed exam?

Do your reports look like this? Does the report from the opposing counsel's analyst look like this? If not, why not? It may be an avenue to explore on cross examination. It's best to be prepared.


I know that this is a rather long post, but I wanted to be comprehensive in presenting the topic and to list the sources for the information presented. Hopefully, this proves helpful.

Enjoy.