Before one begins any sort of research, one usually surveys the literature on the topic to see if any research has been completed and what, if anything, was concluded. Sure, the researcher has a general idea about what they want to study, but a literature review helps to inform and refine the eventual design of the study. According to Shields and Rangarajan (2013), there's a difference between the process of reviewing the literature and a finished work or product known as a literature review. The process of reviewing the literature is often ongoing and informs many aspects of the empirical research project. See what I just did? I found some research on literature reviews and inserted a summary of it into my paragraph. Usually, there's an accompanying citation. Here it is: Shields, P., & Rangarajan, N. (2013). A Playbook for Research Methods: Integrating Conceptual Frameworks and Project Management. Stillwater, Oklahoma: New Forums Press. ISBN 1-58107-247-3.
I received some feedback about the few posts I've written regarding "headlight spread pattern analysis." One was very intriguing - "... assume the premise is true, that there is uniqueness that can be discovered through experimentation. Where would you begin? What would the experimental design look like?" Hmmm....
Given the term "headlight spread pattern analysis," there are four distinct elements - "headlamps," "the diffusion of light," "pattern matching," and a methodology for "analysis." Each of these would need to be handled separately before adding the next element - the recording of this diffusion of light within a scene.
Let's just do a bit of research on the types of headlamps available to the general commercial market, leaving the other three elements for later.
Our first discovery is that we must separate the "lamp" from the "bulb." The bulb provides the "light" and the "lamp" is a system for the projection of that light.
For the lamp's housing, there are two general types: "reflector" and "projector." Of the light sources ("bulbs"), there are several available types: tungsten, tungsten-halogen, high-intensity discharge (HID), LED, and laser. Of the "filament" type lamps, there are over 35 types available for sale in the US and covered by the World Forum for Harmonization of Vehicle Regulations (ECE Regulations), which develops and maintains international-consensus regulations on light sources acceptable for use in lamps on vehicles and trailers type-approved for use in countries that recognise the UN Regulations.
Given the eventual experimental design, it's important to note that the US and Canada "self-certify" compliance with the ECE Regulations. No prior verification is required by a governmental agency or authorised testing entity before the equipment can be imported, sold, installed, or used.
For a bulb's operation, there are variables to consider. There's voltage (usually 12 V) and wattage (between 20 W and 75 W) - collectively known as "Nominal Power." Then there's "luminous flux." In photometry, luminous flux or luminous power is the measure of the perceived power of light. It differs from radiant flux, the measure of the total power of electromagnetic radiation (including infrared, ultraviolet, and visible light), in that luminous flux is adjusted to reflect the varying sensitivity of the human eye to different wavelengths of light.
Lots of big words there. But two stand out - luminous flux (the measure of the perceived power of light) and radiant flux (the measure of the total power of electromagnetic radiation). For our experiment, we'll need to differentiate between these, as someone / some people are going to compare patterns (perception is subjective). We'll also need an objective measure of the total power of our samples. Luminous flux is used in the standard because the point of headlamps is to improve the driver's perception of the scene in front of them as they drive.
Luminous flux is measured in lumens. On our list of bulbs, the luminous flux values are reported as being between 800 lm and 1750 lm, with a tolerance of between ±10% and ±15%. This makes the possible range between 680 lm and 2012.5 lm. It's important to remember that the performance of a bulb over its life span is not binary (e.g. 1550 lm constantly until it stops working). The performance of a lamp degrades over time.
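To make that range concrete, here's a minimal sketch of the arithmetic, using the rated values quoted above and the wider of the two tolerances (the figures come from the bulb list, not from any particular manufacturer's datasheet):

```python
# Minimal sketch: worst-case luminous flux span once the published
# tolerance is applied to each end of the rated range.
rated_min_lm, rated_max_lm = 800, 1750   # rated lumens from the bulb list above
tolerance = 0.15                         # using the wider +/-15% tolerance

lowest = rated_min_lm * (1 - tolerance)   # 800 lm  - 15% -> 680.0 lm
highest = rated_max_lm * (1 + tolerance)  # 1750 lm + 15% -> 2012.5 lm

print(f"possible luminous flux: {lowest:.1f} lm to {highest:.1f} lm")
```

And that span only describes new bulbs; an aging bulb drifts below its rated output, widening the real-world range further.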
Back to the lamp as a system. First, there are fixed lamps - they're bolted onto the front of the vehicle by at least four fasteners. These types need to be "aimed" at the time of installation, and that aim can shift over time as the fasteners loosen. There are also "automatic" lamps, which feature some form of "beam aim control." These "beam aim control" types include headlamp leveling systems, directional headlamps, advanced front-lighting systems (AFS), automatic beam switching, intelligent light systems, adaptive high beam, and glare-free high beam / pixel light.
Now in the cases that I've reviewed, it seems that "headlight spread pattern analysis" was employed when a proper vehicle make / model determination failed due to a lack of available detail - usually due to a low nominal resolution.
Given what I've just shared above about the potential variables in our study, an important revelation emerges. A vehicle make / model determination considers class characteristics - the presence and quantity of features like doors and windows - before considering the presence, quantity, and type of features within those items. If there is insufficient nominal resolution to conduct that determination, how could there be a determination as to the type of lamp system and bulb that would be necessary for any comparison of headlight spread pattern? What if there's a general match of the shape of the pattern, but the quality of light is wrong? Or, what if the recorder's recording process and compression scheme corrupt the shape of the light dispersal or change the quality of the light? How then is a "scientific" comparison possible?
Short answer - it's not. This is one of the ways in which "forensics" (rhetoric) is used to mask a lack of science in "forensic science."
But, let's take a look at that question in a different way. Given all of the variables listed above, what would a normal distribution of "headlight spread patterns" "look like" (observed directly, without recording) for each of the possible combinations of system, bulb, and mounting position? What would they look like after being recorded on a DVR? This adds more variables to the equation.
For the recording system, there's the camera / lens combination, there's the transmission method, and there's the recorder's frame rate and compression scheme to contend with. Sure, you have the evidence item. But you don't know if the system was operating "normally" during that recording, or what "normal" even is until you produce a performance model of the system's operation in the recording of everything listed above. You'll need the "ground truth" of the recorder's capabilities in order to perform a proper experiment.
Remember, the recording may be "known" - meaning you retrieved it from the system and controlled its custody such that the integrity of the file is not in question. But what is unknown is the make / model of the vehicle. THIS CAN'T BE PRESUPPOSED. IT MUST BE DETERMINED.
In the cases that I reviewed, each comparison was performed against a presupposed make / model of vehicle - the "suspect's vehicle." If convenience samples were employed for a "comparison," then it was a few handy cars of the same make / model / year as the accused's vehicle. THIS IS NOT A VEHICLE DETERMINATION. This is no different than a single-person line-up, or a "show-up." This method has no relationship with science.
Back to the literature review and how it may inform a future experimental design.
What I've discovered is that the quantity of variables is quite large. Actually, the quantity of system types, and then the variables within those systems, is quite large. This is before considering how these will be recorded by a given recorder. This information would be required to validate case work, aka a CASE STUDY. A case study is only applicable to that one case. If one wanted to validate the technique, then an appropriate number of recorders would need to be included (proper samples of complete system types).
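Just to illustrate the scale (every count below is an assumption for the sake of the sketch, not a survey result), multiplying even modest category counts together produces an enormous test matrix:

```python
# Illustrative only - every count below is an assumption, not a survey result.
from math import prod

lamp_factors = {
    "housing type (reflector / projector)":       2,
    "bulb / light-source type":                   40,  # ~35 filament types plus HID, LED, laser
    "beam aim control variant":                   8,
    "aim condition (nominal, shifted, misaimed)": 3,
}
recorder_factors = {
    "camera / lens combination": 10,
    "transmission method":        3,
    "frame rate setting":         4,
    "compression scheme":         5,
}

lamp_combos = prod(lamp_factors.values())
full_combos = lamp_combos * prod(recorder_factors.values())
print(f"lamp-system combinations alone: {lamp_combos:,}")
print(f"crossed with one modest recorder matrix: {full_combos:,}")
```

Even with these deliberately conservative guesses, the matrix runs to seven figures before a single pattern has been photographed.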
Given all of this, the cost of a single case study would be beyond the budget of most investigative agencies. It's certainly beyond my budget. The cost of testing the competing hypotheses, "headlight spread pattern analysis has no validity" (H0, the null hypothesis) and "headlight spread pattern analysis has validity" (H1), would be massive.
Nevertheless, given all of the above, to conclude "match" - it is "the accused's vehicle" - one must rule out all other potential vehicles. Given that estimates put the number of cars and trucks in the United States at between 250 and 260 million vehicles for a country with 318 million people, "match" says "to the exclusion of between 250 and 260 million vehicles" - and that doesn't include the random Canadian or Mexican who drove their car / truck across the border to go shopping at Target. Because of this, "analysts" usually equivocate and use terms like "consistent with" or "can't include / exclude." Which, again, is rhetoric - not science.
Sunday, August 18, 2019
First, do no harm
In an interesting article over at The Guardian, Hannah Fry, an associate professor in the mathematics of cities at University College London, noted that mathematicians, computer engineers, and scientists in related fields should take a Hippocratic oath to protect the public from powerful new technologies under development in laboratories and tech firms. She went on to say that "the ethical pledge would commit scientists to think deeply about the possible applications of their work and compel them to pursue only those that, at the least, do no harm to society."
I couldn't agree more. I would add forensic analysts to the list of people who should take that oath.
I look at the state of the digital / multimedia analysis industry and see places where this "do no harm" pledge would re-orient the relationship that practitioners have with science.
Yes, as someone who swore an oath to protect and defend the Constitution of the United States (as well as the State of California), and as someone who had Bill Bratton's "Constitutional Policing" beaten into him (not literally, people), I understand fully the relationship between the State and the Citizen. In the justice system, it is for the prosecution to offer evidence (proof) of their assertions. This simple premise - innocent until proven guilty - separates the US from many "first world" countries.
I've been watching several trials around the country and noticed an alarming trend - junk procedures. Yes, junk procedures and not junk science, as there seems to be no science to their procedures - which serve as a pretty frame for their lofty rhetoric. This trend can be beaten back if the sides agree to stick to the rules and do no harm.
Realizing that I've spent a majority of my career as an analyst in California, and that California is a Frye state, I'll start there in explaining how we, as an industry, can avoid junk status and reform ourselves. Let's take a look.
You might remember that prior to Daubert, Frye was the law of the land. The Frye standard is commonly referred to as the “general acceptance test” under which generally accepted scientific methods are admissible, and those that are not sufficiently established are inadmissible.
The Frye Standard comes from the case Frye v. United States, 293 F. 1013 (D.C. Cir. 1923) in which the defendant, who had been charged with second degree murder, sought to introduce testimony from the scientist who conducted a lie detector test.
The D.C. Court of Appeals weighed expert testimony regarding the reliability of lie detector test results. The court noted: "Just when a scientific principle of discovery crosses the line between the experimental and demonstrable stages is difficult to define…. [W]hile courts will go a long way in admitting expert testimony deduced from a well-recognized scientific principle of discovery, the thing from which the deduction is made must be sufficiently established to have gained general acceptance in the field in which it belongs."
The last part of that sentence is where I want to go with Frye - "in the field in which it belongs."
There is an emerging trend, highlighted in the Netflix series Exhibit A, where [ fill in the type of unrelated technician ] is venturing into digital / multimedia analysis and working cases. They're not using the generally accepted methods within the digital / multimedia analysis community. They're not following ASTM standards / guidelines. They're not following SWGDE's best practices. They're doing the work from their own point of view, using the tools and techniques common to their discipline. Oftentimes, their discipline is not scientific at all, and thus there is no research or validation history on their methods. They're doing what they do, using the tools they know, but in a field where it doesn't belong. Their tools and techniques may be fine in their discipline - but there has been no research on their use in our discipline. Thus, before they engage in our discipline, they should validate those tools and techniques appropriately - in order to do no harm.
Let's look at this not from the standpoint of my opinion on the matter. Let's look at this from the five-part Daubert test.
1. Whether the theory or technique in question can be and has been tested. Has the use of [ pick the method ] been tested? Remember, a case study is not sufficient testing of a methodology according to Daubert.
2. Whether it has been subjected to peer review and publication. There are so few of us publishing papers, and so few places to publish, that this is a big problem in our industry. Combine that with the fact that most publications are behind paywalls, making research on a topic very expensive.
3. Its known or potential error rate. If there is no study, there really can't be a known error rate.
4. The existence and maintenance of standards controlling its operation. If it's a brand new trend, then there really hasn't been time for the standards bodies to catch up.
5. Whether it has attracted widespread acceptance within a relevant scientific community. The key word for me is not "community" but "scientific." There are many "communities" in this industry that aren't at all "scientific." Membership organizations in our discipline focus on rapidly sharing information amongst members, not advancing the cause of science.
So pick the emerging trend. Pick "Headlight Spread Pattern." Pick "Laser Scan Enabled Reverse Projection." Jump into any research portal - EBSCO, ProQuest, or even Google Scholar. Type in the method being offered. See the results ...
The problem expands when someone finds an article, like the one I critiqued here, that seemingly supports what they want to do, whilst ignoring the article's limitations section or the other articles that may refute the assertions. This speaks to the need for a "research methods" requirement in analysts' certification programs.
If you're venturing into novel space, did you validate your tool set? Do you know how? Would you like training? We can help. But remember that people's lives, liberty, and property are at stake (and the accused are innocent until proven guilty). Can we at least agree to begin our inquiries from the standpoint of "first, do no harm"?
Tuesday, August 13, 2019
Four Kinds of Science
As a non-verbal autistic person, "language" has always been an issue for me. If you've seen me teaching / talking during a class or a workshop, this "version" of me is akin to Jim 4.0. I haven't always been extemporaneously vocal. For me, this skill was added in my late 20s and early 30s.
Because of my "issues" with language, I've chosen to participate in the standards setting bodies and assist in the creation of clearly worded documents. Words mean something. Some words mean multiple things - yes English is a crazy language. I've tried to suggest words with single meanings, eliminating uncertainty and ambiguity in our documents.
To summarize der Kiureghian and Ditlevsen (2009), although there is no unanimously approved interpretation of the concept of uncertainty, in a computational or a real-world situation, uncertainty can be described as a state of having incomplete, imperfect and/or inconsistent knowledge about an event, process or system. The type of uncertainty in our documents that is of particular concern to me is epistemic uncertainty. The word ‘epistemic’ comes from the Greek ‘επιστηµη’ (episteme), which means 'knowledge.' Epistemic uncertainty is the uncertainty that is presumed as being caused by lack of knowledge or data. Also known as systematic uncertainty, epistemic uncertainty is due to things one could know in principle but doesn't in practice. This may be because a measurement is not accurate, or because the model neglects certain effects, or because particular data has been deliberately hidden.
With epistemic uncertainty in mind, I want to revisit the concept of "science" - as in "forensic science." The problem for me, for my autistic brain's processing of people's use of the term "forensic science" is that I believe that by their practice (in their work) they're emphasizing the "forensic" part (as in forum - debate, discussion, rhetoric) and de-emphasizing the "science" part. Indeed, do we even know what the word "science" means in this context?
According to Mayper and Pula, in their workshops on the epistemology of science as a human issue, there are four kinds of science:
- Accepted Science - theories that are not yet refuted, after rigorous tests. Counter-examples must be accounted for or shown to be in error. Theories “tentative for ever”, but not discarded frivolously. Good replacements are not easily come by. A new theory must account for not only the data that the old theory doesn’t, but also all the old data that the old theory does.
- Erroneous Science - theories that are not yet refuted, but are tested by false data:
  - Fake Science — scientist intentionally deceives others
  - Mistaken Science — scientist unintentionally deceives self (and others)
- Pseudoscience - theories inconsistent with accepted science, attempts to refute them avoided or ignored, e.g. astrology, numerology, biorhythms, dowsing
- Fringe Science - theories inconsistent with accepted science, not yet refuted, but attempts to do so invited, e.g. Unified field theories (data accumulate faster than theory construction), Rupert Sheldrake’s “morphogenetic fields”, Schmidt’s ESP findings, etc.
I think a few of the popular practices in the "forensic sciences," like "headlight spread pattern analysis" and the merging of laser scans and CCTV images to measure items present in the CCTV footage that aren't present in the laser scan, currently qualify as pseudoscience. Why? They haven't been validated. Questions about validation are often sidestepped, and users focus on specific legal cases where the technique was employed and subsequently allowed in as demonstrative evidence. The way in which these two techniques are employed is inconsistent with accepted science - math, stats, logic ...
Indeed, to qualify as accepted science, these new techniques must "account for not only the data that the old theory doesn’t, but also all the old data that the old theory does." But, in doing so, must it also account for the existing "rules"? For example, with Single Image Photogrammetry, the error potential (range) increases as the distance from the camera to the subject / object increases. The farther away the thing / person is from the camera, the greater the error or range of values (e.g. the subject is between 5'10" - 6'2"). Also, Single Image Photogrammetry needs a reference point in the same orientation as the subject / object. Additionally, it needs that reference to be close to the thing measured. As distance from the reference increases, the error potential (range) increases.
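That distance-driven growth in error is simple arithmetic. A hedged sketch, with assumed numbers: the same ±1 pixel of measurement uncertainty translates into a wider real-world range as nominal resolution falls deeper into the scene.

```python
# Sketch with assumed numbers: a fixed +/-1 px measurement uncertainty
# produces a wider real-world range as nominal resolution falls with distance.
samples = [
    # (distance from camera in metres, nominal resolution in cm per pixel)
    (3,  0.5),
    (10, 2.0),
    (25, 5.0),
]
pixel_uncertainty = 1  # +/- 1 px on each edge of the measured feature

for distance_m, cm_per_px in samples:
    total_range_cm = 2 * pixel_uncertainty * cm_per_px
    print(f"{distance_m:>2} m away at {cm_per_px} cm/px: "
          f"+/-{total_range_cm / 2:.1f} cm ({total_range_cm:.1f} cm total range)")
```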
Thus, for "headlight spread pattern" analysis, what is the nominal resolution of the "pattern?" If the vehicle is in motion, how is motion blur mitigated? Given all of the variables involved and the nominal resolution within the target area (which is also variable due to the perspective effect), the "pattern" would rightly become a "range of values." If it's a "range of values," how can results derived from convenience samples be declared to "match" or be "consistent with" some observed phenomenon? Analysts are employing this technique in their work, but no validation exists - no studies, no peer-reviewed published papers, nothing. Wouldn't part of validating the method mean using the old rules - analytical trigonometry - to check one's work.
The same situation exists for the demonstrative aid produced by the mixed-methods approach of blending CCTV stills or videos (a capture of then) with 3D point clouds (a capture of now). The new approach must account for all the old data. To date, the few limited studies in this area have used ideal situations and convenience samples. None have used an appropriate sample of crappy, low-priced DVRs. For example, Meline and Bruehs used a single DVR and actually tested the performance of their colleagues (link), not the theory that the measurement technique is valid. In their paper, they reference a study that utilized a tripod-mounted camera deployed in an apartment's living room to "test" the author's theory about accurate measurement. The author employed a convenience sample of about 10 friends, and the distance from the camera to the subjects was about 10' - it was done in a living room in someone's apartment, ffs.
I don't want to make the claim that the purveyors of these techniques are engaged in "fake science." I tend to think well of others. I think perhaps they're engaged in "mistaken science," unintentionally deceiving themselves and others.
We can reform the situation. We can insist that our agencies and employers make room in their budgets for research and validation studies. We must publish the results - positive or negative. We must test our assertions. In essence, we must insist that the work that we perform is actually "science" and not "forensics" (rhetoric).
If you'd like to join me in this effort, I'd love to build a community of advocates for "accepted science." Feel free to contact me for more information.
Wednesday, July 31, 2019
The four dimensions and image / video analysis
When someone mentions "different dimensions," we tend to think of things like parallel universes – alternate realities that exist parallel to our own, but where things work or happened differently. However, there is no FRINGE Division defending us against intruders from parallel worlds. The reality of dimensions and how they play a role in the ordering of our Universe is really quite different from this popular characterization. Understanding how they work is foundational to our work as digital / multimedia analysts.
To break it down, dimensions are simply the different facets of what we perceive to be reality (source). We are immediately aware of the three dimensions that surround us on a daily basis – those that define the length, width, and depth of all objects in our universe (the x, y, and z axes, respectively).
The first dimension, as already noted, is that which gives an object its length (the x-axis). A good description of a one-dimensional object is a straight line, which exists only in terms of length and has no other discernible qualities.
Add to it a second dimension, the y-axis (or height), and you get an object that becomes a 2-dimensional shape (like a square).
The third dimension involves depth (the z-axis), and gives all objects a sense of area and a cross-section. The perfect example of this is a cube, which exists in three dimensions and has a length, width, depth, and hence volume.
Scientists believe that the fourth dimension is time, which governs the properties of all known matter at any given point. Along with the three other dimensions, knowing an object's position in time is essential to plotting its position in the universe.
Video and images fit well within this concept. But, it's important to understand what one is looking at when one analyzes multimedia files.
You see, multimedia takes the 4-dimensional world and records it in a 2-dimensional medium. Images and video are flat - X and Y only. There is an element of time as well. But, the third dimension is treated differently. The third dimension gets skewed a bit, causing a perspective effect.
Perspective, in this case, is an approximate representation of an image as it is seen by the eye and processed by the brain. The two most characteristic features of perspective are that objects appear smaller as their distance from the observer increases; and that they are subject to foreshortening, meaning that an object's dimensions along the line of sight appear shorter than its dimensions across the line of sight.
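The shrinking-with-distance part of that definition is just the pinhole-projection relationship: projected size is proportional to real size divided by distance from the camera. A minimal sketch, with an assumed focal length and subject height:

```python
# Pinhole-projection sketch: projected size ~ real size / distance.
focal_length_px = 1000     # assumed focal length, expressed in pixels
subject_height_m = 1.80    # assumed real-world height of the subject

for distance_m in (2, 5, 10, 20):
    projected_px = focal_length_px * subject_height_m / distance_m
    print(f"at {distance_m:>2} m the 1.80 m subject spans about {projected_px:.0f} px")
```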
Because of this effect, it's important to master a few concepts as well as to have a valid toolset when working in this space.
Conceptually, Nominal Resolution is the numerical value of pixels per inch as opposed to the achievable resolution of the imaging device. In the case of digital cameras, this refers to the number of pixels of the camera sensor divided by the corresponding vertical and horizontal dimension of the area photographed. (SWGDE Digital & Multimedia Evidence Glossary, Version 3.0) In video and image, the farther away from the camera one gets, or the deeper into the scene one gets, the lower the nominal resolution becomes. At a certain point, nominal resolution moves from pixels per unit of measure to unit of measure per pixel (e.g. 2cm / px vs 2px / cm).
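To put numbers on it (the sensor width and scene widths below are assumptions for illustration), here's how nominal resolution falls, and eventually flips from pixels per unit to units per pixel, as the imaged area widens deeper into the scene:

```python
# Sketch of nominal resolution with assumed numbers: sensor pixels divided
# by the real-world width imaged at a given depth in the scene.
sensor_width_px = 1920

for scene_width_cm in (200, 800, 3200):   # the imaged area widens with depth
    px_per_cm = sensor_width_px / scene_width_cm
    cm_per_px = scene_width_cm / sensor_width_px
    unit = f"{px_per_cm:.1f} px/cm" if px_per_cm >= 1 else f"{cm_per_px:.1f} cm/px"
    print(f"area {scene_width_cm / 100:.0f} m wide -> nominal resolution about {unit}")
```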
The problem with validity in measurements in this discipline is that the majority of software - freeware, and even Photoshop - treats every pixel the same. Basic planar geometry says that a pixel equals a real world measure no matter where in the image you measure. But, with depth / perspective, we know this can't be the case. Thus, we need a valid toolset for Single Image Photogrammetry.
Single Image Photogrammetry uses elements within the image itself to estimate the measure of unknown objects / subjects (e.g. a doorway's known height informs the measure of a person who walks by / through). Single Image Photogrammetry is my preferred method of photogrammetry as it employs only the evidence item and does not require the creation of additional files.
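To make the proportional idea concrete, here is a deliberately simplified sketch. It assumes the known reference (a doorway) and the subject stand at the same depth on the same ground plane and ignores lens distortion; a real single-image photogrammetry workflow corrects for perspective rather than relying on a flat proportion. All the numbers are hypothetical.

```python
# Deliberately simplified: reference and subject assumed at the same depth,
# same ground plane, lens distortion ignored. Real single-image photogrammetry
# corrects for perspective; this only shows the proportional idea plus a
# pixel tolerance that turns the answer into a range rather than a point value.
doorway_height_cm = 203.0   # known reference height (assumed)
doorway_px = 310.0          # measured height of the doorway in the image (assumed)
subject_px = 265.0          # measured height of the person in the image (assumed)
pixel_tolerance = 2.0       # +/- px uncertainty from compression, blur, edge picking

cm_per_px = doorway_height_cm / doorway_px
low = (subject_px - pixel_tolerance) * cm_per_px
high = (subject_px + pixel_tolerance) * cm_per_px
print(f"estimated height: {low:.0f} cm to {high:.0f} cm (a range, not a single value)")
```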
What do I mean by this - creation of additional files?
When utilizing reverse projection, for example, you must create a brand new recording. Assuming that you use the same recorder and camera/lens that was used in the creation of the evidence file (and that their settings remain unchanged from the time of the original recording), this new piece of evidence can be associated with the evidence file with a simple overlay. Similarity metrics can be employed to verify that the camera/lens position/settings haven't changed.
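As for those similarity metrics, one illustrative (not prescriptive) approach is to correlate static background regions of an original frame against the re-creation: a zero-mean normalized cross-correlation near 1.0 suggests the view hasn't changed. The threshold used below is an assumption, not an established standard, and the frames are synthetic stand-ins.

```python
# Hedged sketch: zero-mean normalized cross-correlation between an original
# frame and a re-creation frame as a check that the camera view is unchanged.
import numpy as np

def background_similarity(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation of two same-sized grayscale frames."""
    a = frame_a.astype(float) - frame_a.mean()
    b = frame_b.astype(float) - frame_b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

# usage sketch with synthetic stand-in frames
rng = np.random.default_rng(0)
original = rng.integers(0, 255, (480, 640)).astype(float)
recreation = original + rng.normal(0, 5, original.shape)   # same view, mild noise

score = background_similarity(original, recreation)
print(f"similarity {score:.3f} -> "
      f"{'view consistent' if score > 0.95 else 'camera may have moved'}")
```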
BUT... you must understand what reverse projection is from an evidentiary standpoint. You're creating a demonstrative exhibit when you engage in reverse projection (you must adequately explain what your exhibit is demonstrative of). You are creating a piece of evidence to demonstrate a single theory of the case. Thus, multiple reverse projection exhibits would be required in order to satisfy Daubert's requirement to account for multiple theories. It's also important to know that reverse projection alone is not measurement. It's an overlay. Because of compression and other errors, there will be a range of values possible for your eventual measure - not a single value. Reverse projection can assist in a follow-up measurement, such as Single View Metrology. Thus, the demonstrative (reverse projection) combined with a valid measure becomes reverse projection photogrammetry.
With the three dimensions properly accounted for, it's time to address the fourth dimension (pun intended). As noted in previous posts, time information extracted from multimedia containers is not "ground truth." It can't be assumed to be accurate. These devices are nothing more than a $7 box of parts. Thus, in order to attempt to link the timing information in the container to previous events, a valid experiment must be run.
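What might such an experiment look like? A hedged sketch: record a trusted reference clock in frame, pull the per-frame times from the container, and quantify the offset and drift. The numbers below are invented for illustration.

```python
# Hedged sketch of a timing experiment: container timestamps vs. a trusted
# reference clock that was recorded in frame (all values invented).
container_s = [0.00, 1.02, 2.08, 3.01, 4.11, 5.05]   # per-frame times from the DVR
reference_s = [0.00, 1.00, 2.00, 3.00, 4.00, 5.00]   # times read off the reference clock

offsets = [c - r for c, r in zip(container_s, reference_s)]
drift_per_s = (offsets[-1] - offsets[0]) / (reference_s[-1] - reference_s[0])
print(f"max offset {max(offsets):+.2f} s, "
      f"mean drift {drift_per_s * 1000:+.0f} ms per second of recording")
```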
All of this is to say, measuring objects / subjects within evidence footage is complex and requires a trained / experienced analyst employing a valid toolset. If you'd like to continue this discussion and move beyond simple vendor training (which buttons do what) and into the world of science and experimentation, we'd love to see you in one of our classes.
Friday, July 19, 2019
What is forensic science
As I prepare to head out to Orlando for next week's OSAC in-person meeting, I want to revisit one of the papers that the OSAC has issued since its founding.
Consider that the OSAC is a group that includes all forensic science disciplines. Thus, in harmonizing the language used to describe what should be a simple term - forensic science - much work was done to arrive at a definition that works for all forensic science disciplines.
I've shared the highlights in several posts and papers. Here's the full discussion.
---
2. Forensic Science
A definition of forensic science should focus on the evidence scrutinized and the questions answered by the inquiry. After extensive research, surveys, and discussions, the TG formed the following understanding of the aim and purpose of forensic science:
Traces are the fundamental objects of study in forensic science. A trace is a vestige, left from a past event or activity, criminal or not. The principle that every contact leaves a trace was initially attributed to Edmond Locard, and has evolved into a new definition of the trace to include a lacuna in available evidence, as well as activities in virtual settings (Jaquet-Chiffelle, 2013):
A trace is any modification, subsequently observable, resulting from an event.
This is not to suggest that all forensic questions involve event reconstruction, merely that all traces involve some modification. Even immutable objects can be a trace when their occurrence in relation to a forensic inquiry is the consequence of an event (e.g., a mobile device identifier deposited at a crime scene, or DNA transferred onto a victim). The modification can affect an entity in an environment or the environment itself. Its nature can be physical or virtual, material or immaterial, analog or digital. It can reveal itself as a presence or as an absence.
Forensic science addresses questions, potentially across all forensic disciplines. These questions are addressed using a specific and finite number of core forensic processes. For the purpose of this document, these processes are labeled as: 1) authentication, 2) identification, 3) classification, 4) reconstruction, and 5) evaluation.
The following definition of forensic science emerged from this work:
The systematic and coherent study of traces to address questions of authentication, identification, classification, reconstruction, and evaluation for a legal context.
The term systematic in this definition encompasses empirically supported research, controlled experiments, and repeatable procedures applied to traces. The term coherent entails logical reasoning and methodology. This definition uses legal context in the broadest terms, including the typical criminal, civil, and regulatory functions of the legal system, as well as its extensions such as human rights, employment, natural disasters, security matters.
---
Continuing on ...
---
3. Digital/Multimedia Evidence
To understand the scientific foundations of digital/multimedia evidence and how this fits into forensic science, it is necessary to consider the specializations of digital/multimedia evidence. Digital/multimedia evidence encompasses the following sub-disciplines (ed. note: edited for brevity), which are organized according to the current OSAC structure:
Video/image technology and analysis: handling images and videos for forensic purposes. This includes classification and identification of items, such as comparing an item in an image or video with a known item (e.g., car, jacket). This also includes authentication of images and videos, metadata analysis, Photo Response Non-Uniformity (PRNU) analysis, image quality assessment, and detection of manipulation. Operational techniques include image and video enhancement and restoration.
Digital evidence: handling digital traces for forensic purposes, including classification and identification of items, activity reconstruction, detection of manipulation (e.g., authentication of digital document, concealment of evidence). Within the current OSAC structure, audio recordings are treated as a form of digital evidence for enhancement and authentication purposes.
The foundational sciences for the various sub-disciplines of digital/multimedia evidence are primarily biology, physics, and mathematics, but also include: computer science, computer engineering, image science, video and television engineering, acoustics, linguistics, anthropology, statistics, and data science. Principles of these, and other disciplines, are applied to the traces, data, and systems examined by forensic scientists. Study of foundational principles in digital/multimedia evidence is ongoing, with consideration for their suitability in forensic science applications.
Furthermore, many digital traces are changes to the state of a computer system resulting from user actions. In this context, the discovery of principles in how computer systems function, is a fundamental scientific aspect of digital/multimedia evidence. The systematic and coherent study of digital/multimedia evidence is made more complicated by the evolving nature of technology and its use. While the foundations of digital/multimedia evidence are largely in computer science, computer engineering, image science, video and television engineering, and data science, the underlying digital traces are, in large part, created by actions of operating systems, programs, and hardware that are under constant development. As a result, it will not always be possible to test in advance the performance of such systems under every possible combination of variables that may arise in casework. However, it may be possible, to test the performance of a particular system under a particular set of variables in order to address questions arising in a specific case. For instance, digital documents created using a new version of word processing software can exhibit digital traces that were not previously known. The observed traces can be understood by conducting experiments; studying the software under controlled conditions. In this manner, generalized knowledge of digital/multimedia evidence is established and can be used by any forensic scientists to obtain reproducible, widely accepted results.
---
It's this last paragraph that I'll finish with. Notice these statements:
- "However, it may be possible, to test the performance of a particular system under a particular set of variables in order to address questions arising in a specific case."
- "The observed traces can be understood by conducting experiments; studying the software under controlled conditions. In this manner, generalized knowledge of digital/multimedia evidence is established and can be used by any forensic scientists to obtain reproducible, widely accepted results."
These statements have to do with validation and experimental design. Are you validating your tools? Are you conducting experiments, following the rules of experimental design?
If you'd like to explore these concepts, we've got classes that address most of the topics illustrated in this section of the document. Check out our calendar. If you find a date / class that works for your schedule, sign up. If you can't find a date that works, suggest one. We're here to help.
See you in Orlando.
Friday, June 28, 2019
Junk Science?
Great news from Texas has brought out a bunch of people with conflicting agendas. The great news? The George Powell case has moved to retrial after the Texas Court of Criminal Appeals vacated the conviction and ordered a new trial.
Whilst the media buzzes about the evidence used in the original trial, and throws "junk science" in the same sentence as "forensic video analysis," most in the media commit journalistic misconduct in their reporting of the case and the circumstances around the re-trial.
Take this article (link) for instance, "George Powell has maintained his innocence the entire time he's been behind bars. Now, he will have a chance to prove it."
WRONG.
In vacating the conviction, George Powell's status has returned to Innocent as the "proof" of his guilt has been vacated or struck down. Powell, under US law, is innocent until proven guilty in a court of law. As a presumed innocent man, he should be afforded the opportunity to make bail and to prepare for his defense.
When the trial begins, it's for the prosecution to offer the evidence that links Powell to the crime, if such evidence exists. It perverts the course of justice to require Powell to prove his innocence. Why? Not only is it the way our system works, innocent until proven guilty, but it's impossible to prove a negative.
Thus, when the trial begins, it's not "forensic science" that will be on trial. It's not "forensic video analysis" that will be on trial. What will be on trial is the evidence, offered up as proof of some condition. Then, it's up to the Trier of Fact to evaluate that evidence.
Forensic science and forensic video analysis are scientific, when performed scientifically by educated, trained, and proficient practitioners utilizing valid tools and an appropriate and valid methodology.
As regards Photogrammetry specifically, a measurement shouldn't be used to "identify" an object or a subject. Rare is the case that there is nominal resolution in the evidence item sufficient to measure with an output range that is so very precise. If the measurement's range of values in this case, properly employed, is between 5'6" and 5'10", as an example, then more than 65% of the population of the area will fit within that range. It's hardly an identifying characteristic. It does, however, help prioritize tips as well as to help exclude from the enquiry those subjects more than a standard deviation away from that range. This is why it's important to engage in valid methods when performing an analysis.
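To see why a height range like that is weak as an identifier, here's a minimal sketch that estimates what share of a population falls inside such a range. The population mean and standard deviation are placeholders of my own, not figures from any case; the point is only that the answer depends entirely on the population parameters you assume.

```python
# A minimal sketch (not the method used in any case): estimate the share of a
# population whose height falls within a photogrammetric estimate's range.
# The population mean and standard deviation below are placeholders.
from scipy.stats import norm

pop_mean_in = 69.0    # assumed population mean height, inches (placeholder)
pop_sd_in = 3.0       # assumed population standard deviation, inches (placeholder)

low, high = 66.0, 70.0   # the example range: 5'6" to 5'10"

share = norm.cdf(high, pop_mean_in, pop_sd_in) - norm.cdf(low, pop_mean_in, pop_sd_in)
print(f"Share of the population within {low:.0f}-{high:.0f} in: {share:.1%}")
```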
Tuesday, June 25, 2019
So you want to dive deep?
During last week's class, and at the request of several who wrote ahead to make sure that we were going to "dive deep," we dove deep into science. Here's an example of how deep the discussions often go:
In a modern chemistry textbook (link), students read the following:
"Matter comes in many forms. The fundamental building blocks of matter are atoms and molecules. These particles make up elements and compounds. An atom is the smallest unit of an element that maintains the chemical identity of that element. An element is a pure substance that cannot be broken down into simpler, stable substances and is made of one type of atom."
Sound good, so far? No, actually. There's so much missing.
For most people, this seems simple. The statements are matter-of-fact about how material objects exist. We read them and assume that this has been seen and proven and when we look at a piece of wood or a drop of water, we tell ourselves that beyond the reach of our eyes, there is actually no “wood” or “water” but simply atoms arranged in different ways to cause different substances to appear to us, like an Impressionist painting composed of dots of paint.
If we move back, however, to the early 1900s, students were taught the following in The Elements of Chemistry: Inorganic and Organic. Norton, S.A. 1884 (link):
"We know nothing of the manner in which the ultimate particles of matter are arranged together: we believe that they are arranged in accordance with certain theories which we shall now proceed to develop.
All masses of matter may be subdivided into very small particles; but it is probable that there is a limit to this subdivision, and that all bodies are made up of particles so infinitesimally small that they are inappreciable to our senses. By the terms of this theory,
A molecule is the smallest particle of matter capable of existing in the free state:
An atom is the smallest particle of matter that is capable of entering into or existing in a state of chemical combination."
Notice how carefully the old teachers distinguished facts from theories. The textbook tells students plainly that scientists believe in theories and that teaching about atoms and molecules is a matter of faith, not fact.
In this same textbook, Norton warns of errors that people are prone to when they speak of the science and its facts. We read:
"The facts of chemistry are established by experiment, and are capable of being reproduced. They find a practical application in the arts, which is altogether independent of any explanation that may be made of them. When, however, we attempt to reason upon these facts, to classify them, to interpret them, we at once begin to form theories. A theory which renders a reasonable explanation of a great number of facts is useful (1) because it enables us to group them into a system, and (2) because it often leads to new experiments and to the discovery of other facts.
We are liable to three errors: (1) we may assume that to be a fact which has no existence; or (2) we may sometimes mistake a phenomenon, so as to imagine that to be a cause which is only an effect of some unknown cause; or, finally, (3) we may become so accustomed to the language of theory as to mistake its definitions for facts."
Can you not see how differently the mind of the student is formed by this older textbook than by the modern textbook, which commits the third of the errors Norton describes? The teaching of atoms and molecules in modern textbooks is no longer couched in terms of theory and belief. It is presented as immutable fact. Yet there is no more proof that atoms actually exist today than there was in 1900. The language of theory has simply been abandoned by modern scientists.
Swap out the word "chemistry" for "video analysis" or your specific forensic science discipline, and Norton's caution still makes for great advice - from 1884.
This simple discussion, generic to science, has profound implications in the forensic sciences. I speak often of what we "know" vs. what we can "prove," of the null hypothesis, and of conducting valid experiments. Remember the NAS Report's conclusion in 2009 - forensic science sucks at science. Well, how does forensic science remedy the situation - by engaging in actual science. That all starts with understanding how science is conducted.
As it relates to measurements, comparisons, vehicle determinations, etc., we start with the basics. To get to a vehicle determination, we must first have a nominal resolution sufficient to address the task. Same with photogrammetry - we need sufficient nominal resolution of the item or object of interest so that the measure will (at least) exceed the error. We also must understand that a measurement can't be used to identify something. The measure only contributes to the classification of the item or the object.
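To make the "measure must exceed the error" point concrete, here is a minimal sketch with purely hypothetical numbers: if the nominal resolution at the object is a couple of inches per pixel, a one-pixel uncertainty at each edge of the object already implies a sizeable error band.

```python
# A minimal sketch with hypothetical numbers: does the measurement exceed
# the error implied by the nominal resolution in the region of interest?
inches_per_pixel = 2.0        # nominal resolution at the object (assumed)
object_span_pixels = 4        # pixels spanned by the object of interest (assumed)
edge_uncertainty_px = 1.0     # assume +/- 1 pixel uncertainty at each edge

measurement = object_span_pixels * inches_per_pixel   # 8.0 inches
error = 2 * edge_uncertainty_px * inches_per_pixel    # +/- 4.0 inches

print(f"Estimate: {measurement:.1f} in, +/- {error:.1f} in")
if measurement <= error:
    print("The measure does not exceed the error - the data supports no conclusion.")
```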
So, if you want to dive deep - really deep - our classes are open to all who have an enquiring mind. Even our introductory courses move beyond simple button pushing to explore the depth of the discipline.
Monday, May 27, 2019
When the data supports no conclusion, just say so
I can't count the number of requests I've received over the years in which investigators have asked me to resolve a few pixels or blocks into a license plate or other identifying item. If, at the Content Triage step, nominal resolution isn't sufficient to answer the question, I say so.
When the data supports no conclusion, I try to quantify why. I usually note the nominal resolution and / or the particular defect that may be getting in the way - blur, obstruction, etc. What I don't do is equivocate. Quite the opposite, I try to be very specific as to why the question can't be answered. In this way, there is no ambiguity in my conclusion(s).
Additionally, whoever is responsible for the source of the evidence will have insight as to potential improvements to the situation. For example, if the question is "what is the license plate," and the camera is positioned to monitor a parking lot, then the person / company responsible for the security infrastructure can be alerted to the potential need for additional coverage in the area of interest. This could mean additional cameras, or a change in lensing, or ...
Why this topic today?
I received a link to this article in my inbox. "Expert says Merritt truck ‘cannot be excluded’ as vehicle on McStay neighborhood video." Really? "Cannot be excluded?" What does, "cannot be excluded" mean?
At issue is a 2000 Chevy work truck, similar to the one below. Chevy makes some of the most popular trucks in the US, second only to Ford. This case takes place in Southern California, home to more than 20m people from Kern to San Diego counties.
"Cannot be excluded" is not a conclusion, it's an equivocation. Search CarFax.com for used Chevy Silverado 3500 HD trucks for sale within a 100 mile radius of Fallbrook, CA (92028). I did. I found 49 trucks for sale on that site. 49 trucks that "cannot be excluded" ... AutoTrader listed 277 trucks for sale. 277 more trucks that "cannot be excluded" ...
All of this requires us to ask a question, is the goal of a comparative analysis to "exclude" or to "include?" How would you know if you've never taken a course in comparative analysis? Perhaps we can start with the SWGDE Best Practices for Photographic Comparison for All Disciplines. Version: 1.1 (July 18, 2017) (link):
Class Characteristic – A feature of an object that is common to a group of objects.
Individualizing Characteristic – A feature of an object that contributes to differentiating that object from others of its class.
5.2 Examine the photographs to determine if they are sufficient quality to complete an examination, and if the quality will have an effect on the degree to which an examination can be completed. Specific disciplines should define quality criteria, when possible, and how a failure to meet the specified quality criterion will impact results. (This may apply to a portion of the image, or the image as a whole.)
5.2.1 If the specified quality criteria are not met, determine if it is possible to obtain additional images. If the specified quality criteria are not met, and additional images cannot be obtained, this may preclude the examiner from conducting an examination, or the results of the examination may be limited.
5.3 Enhance images as necessary. Refer to ASTM Guide E2825 for Forensic Digital Image Processing.
These steps are the essence of the Content Triage step in the workflow - do I have enough nominal resolution to continue processing and reach a conclusion?
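As a rough illustration of that triage question, here is a minimal sketch assuming a reference object of known size is visible near the region of interest. The numbers and the threshold are hypothetical, not drawn from the SWGDE documents.

```python
# A minimal sketch of the Content Triage check, with hypothetical numbers:
# estimate nominal resolution in the region of interest from a reference
# object of known real-world size, then compare it to a task threshold.
reference_span_pixels = 18     # pixels across the reference object in the frame (assumed)
reference_span_inches = 80.0   # known real-world width of that object (assumed)

pixels_per_inch = reference_span_pixels / reference_span_inches
required_ppi = 1.0             # illustrative threshold for the requested examination

print(f"Nominal resolution in ROI: {pixels_per_inch:.2f} px/in "
      f"({1 / pixels_per_inch:.1f} in/px)")
if pixels_per_inch < required_ppi:
    print("Insufficient nominal resolution - document the limitation and stop.")
else:
    print("Sufficient to continue processing toward a conclusion.")
```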
But, there is more to this process than just a comparison of a "known" and an "unknown." How does one go from "unknown" to a "known" for a comparison? How do you "know" what is "known?" First, you must attempt a Vehicle Make / Model Determination of the vehicle in the CCTV footage.
For a Vehicle Make / Model Determination, the SWGDE Vehicle Make/Model Comparison Form. Version: 1.0 (July 11, 2018) (source) is quite helpful.
How many features are shared between model years in a specific manufacturer's product line? Class Characteristics can help get you to "truck," then to "work truck" (presence of exterior cargo containers not typically present in a basic pickup truck), then to "make" based on shapes and positions of features of the items found in the Comparison form. The form can be used to document your findings.
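Alongside the completed form, it can help to keep a structured record of each observed feature. A minimal sketch of one way to do that follows; the field names are my own illustration, not the SWGDE form's.

```python
# A minimal sketch (field names are hypothetical, not the SWGDE form itself)
# of structuring notes on observed class characteristics.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FeatureObservation:
    feature: str            # e.g., "exterior cargo containers"
    observed: bool
    notes: str = ""         # limitations, nominal resolution at the feature, etc.

@dataclass
class VehicleDetermination:
    questioned_item: str    # evidence reference
    observations: List[FeatureObservation] = field(default_factory=list)
    conclusion: str = "the data supports no conclusion"   # default until supported

record = VehicleDetermination(questioned_item="CCTV export, camera 3, 02:14:37")
record.observations.append(FeatureObservation("exterior cargo containers", True))
record.observations.append(
    FeatureObservation("grille shape", False, "nominal resolution too low at the grille"))
print(record)
```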
You may get to Make, but getting to Model in low resolution images and video can be frustrating. What's the difference between a Chevy Silverado 1500, 1500LD, 2500HD, 3500HD? There are more than 10 trim variations of the 1500 series alone. What's the difference between a 2500HD and a 3500HD?
After you've documented your process of going from "object" to "work truck" to a specific model of work truck, how do you move beyond class, to make, to model, to year, to a specific truck? Remember, an Individualizing Characteristic is a feature of an object that contributes to differentiating that object from others of its class. Before you say "headlight spread pattern," please know that there is no valid research supporting "headlight spread pattern" as an individualizing characteristic - NONE. I know that there are cases where this technique has been used, but rhetoric is not science. Many jurisdictions, such as California and Georgia, will allow just about everything in at trial, so not having one's testimony excluded at trial is not proof of anything scientific.
Taking your CCTV footage, you've made your make / model / year determination using the SWGDE's form. Now, how do you move to an individual truck?
This is where basic statistics and inferential reasoning are quite necessary. Do you have sufficient nominal resolution to pick out identifying characteristics in the footage? If not, you're done. The data supports no conclusion as to individualization.
But assuming that you do, how do you work scientifically and as bias-free as possible? Unpack the biasing information that you received from your "client" and design an experiment. In the US, given our Constitutional provisions that the accused are innocent until proven guilty, it is for the prosecution to prove guilt. Thus, the starting point for your experiment is that the truck in question is not a match. With sufficient nominal resolution, you set about to prove that there is a match. If you can't, there is no match as far as you're concerned. Remember, the comparative analysis should not be influenced by any other factors or items of evidence.
In designing the experiment, you'll need a sample set of images. You see, a simple "match / no match" comparison needs an adequate sample. It perverts the course of justice to simply attempt the comparison on the accused's vehicle. We don't do witness ID line-ups with just the suspect. Neither should anyone attempt a comparison with just a single "unknown" image - the accused's. Yes, I do use this specific provision of English Common Law to explain the problem here. Perverting the Course of Justice can be any of three acts: fabricating or disposing of evidence; intimidating or threatening a witness or juror; or intimidating or threatening a judge. In this case, one Perverts the Course of Justice when one fabricates a conclusion (scientific evidence) where none is possible.
Back to the experiment. How many "unknown" images would you need to approach 99% confidence in your results, thus assisting the course of justice? Answer = 52. How did I come up with 52?
Exact - Generic binomial test
Analysis: A priori: Compute required sample size
Input: Tail(s) = One
Proportion p2 = 0.8
α err prob = 0.01
Power (1-β err prob) = .99
Proportion p1 = 0.5
Output: Lower critical N = 35.0000000
Upper critical N = 35.0000000
Total sample size = 52
Actual power = 0.9901396
Actual α = 0.008766618
A generic binomial test is similar to the flip of a coin - only two possible outcomes, heads / tails or match / no match. It's the simplest test to perform.
The error probability is your chance of being wrong. At 52 test images, you've got about a 1 in 100 chance of being wrong (α ≈ 0.01), with a power of 0.99. As you move below 15 test images, you have a greater chance of being wrong than being right. With a sample size of 1, you're likely more accurate tossing a coin.
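For those who'd rather not take the G*Power output on faith, here is a minimal sketch that reproduces the same a priori computation with scipy; p0, p1, alpha, and the power target are the values listed above.

```python
# A minimal sketch of the a priori sample-size computation above:
# exact one-tailed binomial test, null proportion 0.5 vs. alternative 0.8,
# alpha = 0.01, target power = 0.99.
from scipy.stats import binom

p0, p1 = 0.5, 0.8
alpha, target_power = 0.01, 0.99

n = 1
while True:
    n += 1
    # smallest critical count c with P(X >= c | n, p0) <= alpha
    c = next(k for k in range(n + 2) if binom.sf(k - 1, n, p0) <= alpha)
    power = binom.sf(c - 1, n, p1)    # P(X >= c | n, p1)
    if power >= target_power:
        break

print(f"Total sample size = {n}, critical count = {c}")
print(f"Actual power = {power:.7f}, actual alpha = {binom.sf(c - 1, n, p0):.9f}")
```

This search should land on the same total sample size of 52 and critical count of 35 reported above.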
The 52 samples help us to get to make / model / year. You may choose to refresh those samples with new ones to perform a "blind comparison," and attempt to "include" the suspect's vehicle in your findings. To do this, you'd need the specific description of the "known" vehicle that makes it unique versus the others in the sample.
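A minimal sketch of how such a blinded set might be assembled follows; the file names are hypothetical, and the point is only that the examiner works from anonymized samples while someone else holds the key.

```python
# A minimal sketch (hypothetical file names): shuffle foil exemplars together
# with the questioned exemplar and anonymize the names, so the examiner does
# not know which sample is the suspect vehicle. The key stays with the case
# manager, not the examiner.
import random

foils = [f"foil_{i:02d}.png" for i in range(1, 52)]   # 51 foil exemplars (assumed)
questioned = "suspect_truck.png"                      # the accused's vehicle (assumed)

pool = foils + [questioned]
random.shuffle(pool)

key = {f"sample_{i:02d}.png": original for i, original in enumerate(pool, start=1)}
examiner_set = sorted(key)   # anonymized names only

print(examiner_set[:3], "...")
```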
If I were performing a make / model / year determination, and then a comparison, I would note any errors or limitations in my report. If the data supported no conclusion, or if the limitations in the data prevented me from arriving at a determination, I would note that the data supported no conclusion. If I was able to make a determination, I would have noted my process and how I arrived at the conclusion (in a reliable, valid, and reproducible fashion).
The problem with the reporting of the case is that the "cannot be excluded" portion is in the headline. One has to read deeper into the article to find, "... Liscio denied (that his conclusions may have been formed to fit the bias of the prosecution, who was paying him...), and reminded McGee more than once that he had not identified the truck specifically as Merritt’s..."
Which requires that another question be asked: if the analyst had not identified the vehicle, what was he doing there in testimony?
"Among the items that helped to reach the conclusion that the vehicle was “consistent” with Merritt’s truck was a glint caught by the video that matched the position of a latch on a passenger-side storage box toward the rear of the truck, said Liscio,who uses 3D imagery."
Here we move from "cannot be excluded" to "consistent with," another equivocation. How does one not identify a vehicle, but find that said unknown vehicle is "consistent with" the "known" vehicle? This is the problem with Demonstrative Comparisons. When you place a single "known" against a single "unknown" in a demonstrative exhibit, you are making a choice as to what to include in your exhibit - thus you have concluded.
Back to the demonstrative. What is it about the latch on the side of the truck that is unique? Won't all work trucks of this type have latches on their cargo containers? Why is this one so special that it can only be found on the accused's truck? Of these questions, the article does not give an answer.
"I’m not saying that this your client’s vehicle,” Liscio repeated. “All I am saying is that the vehicle in question is consistent with my report, and if there is another vehicle that looks similar, that is possible.” How about at least 326 vehicles found on just two used car web sites?
If you'd like to explore these topics in depth, I'd invite you to sign up for any one (or all) of our upcoming training sessions. Our Statistics for Forensic Analysts course is offered on-line as micro learning and thus enrollment can happen at your convenience. Our other courses can be facilitated at your location or at ours, in Henderson, NV.
Thursday, May 16, 2019
The need for objective measures in forensic evidence
The latest edition of Significance, the journal of the Royal Statistical Society, features an article on the importance of objective measures and standards in evaluating evidence. Whilst many of the points made by the author are valid and applicable, practitioners within the digital / multimedia forensic sciences already have standards and guidelines for basing their work on an objective footing.
Investigators want to use video evidence to "identify" an object in a video - gun, knife, etc. But this "identification" is often complicated by a lack of nominal resolution in the region of interest. Many will simply state that the object has been identified, without actually taking the steps to formally conclude (prove) their assertions. They go with what they "know" (abductive reasoning), as opposed to what they can prove. There's a better way.
First off, here are some relevant definitions from the SWGDE Digital & Multimedia Evidence Glossary. Version: 3.0 (June 23, 2016)
Artifact: A visual/aural aberration in an image, video, or audio recording resulting from a technical or operational limitation. Examples include speckles in a scanned picture or “blocking” in images compressed using the JPEG standard.
Image Analysis: The application of image science and domain expertise to examine and interpret the content of an image, the image itself, or both in legal matters.
Image Comparison (Photographic Comparison): The process of comparing images of questioned objects or persons to known objects or persons or images thereof, and making an assessment of the correspondence between features in these images for rendering an opinion regarding identification or elimination.
Image Content Analysis: The drawing of conclusions about an image. Targets for content analysis include, but are not limited to: the subjects/objects within an image; the conditions under which, or the process by which, the image was captured or created; the physical aspects of the scene (e.g., lighting or composition); and/or the provenance of the image.
Nominal resolution: The numerical value of pixels per inch as opposed to the achievable resolution of the imaging device. In the case of flatbed scanners, it is based on the resolution setting in the software controlling the scanner. In the case of digital cameras, this refers to the number of pixels of the camera sensor divided by the corresponding vertical and horizontal dimension of the area photographed.
Video Analysis: The scientific examination, comparison, and/or evaluation of video in legal matters.
It's also important to define "forensic science." For this, I'll refer to "A Framework to Harmonize Forensic Science Practices and Digital/Multimedia Evidence." OSAC Task Group on Digital/Multimedia Science. 2017: "Forensic science is the systematic and coherent study of traces to address questions of authentication, identification, classification, reconstruction, and evaluation for a legal context."
What is a trace? "A trace is any modification, subsequently observable, resulting from an event. You walk within the view of a CCTV system, you leave a trace of your presence within that system."
SWGDE Best Practices for Photographic Comparison for All Disciplines. Version: 1.1 (July 18, 2017) provides a few more definitions.
Class Characteristic: A feature of an object that is common to a group of objects.
Individualizing Characteristic: A feature of an object that contributes to differentiating that object from others of its class.
With the definitions in mind, SWGDE Best Practices for Image Content Analysis. Version: 1.0 (February 21, 2017) provides the framework to begin the Content Triage step in the workflow - can I answer the question with the evidence file?
5. Evidence Preparation
5.2 Based on the request for examination, determine if submitted imagery is available to complete requested analysis. Determine whether submitted imagery is of sufficient quality to complete the requested examination, or if the image quality will have an effect on the degree to which an examination can be completed.
5.2.1 If the specified quality criteria are not met, determine if it is possible to obtain additional images. If the specified quality criteria are not met, and additional images cannot be obtained, this may preclude the examiner from conducting an examination, or the results of the examination may be limited.
(Do I have sufficient nominal resolution within the target area to fulfill the request - answer the question? The nominal resolution, measured in pixels per inch or inches per pixel - or its metric equivalent - is the objective measure.)
6. Examinations Method
There is no one specific methodology for content analysis. The methodology for analysis will primarily be derived to answer the requested examination. However, any methodology applied to content analysis should incorporate an analysis of the imagery, the cataloguing of relevant features, an evaluation of the significance of the detected features, an evaluation of the limiting factors of the imagery, the formation of a conclusion, and a verification of the analysis. The repeatability (and / or reproducibility) of the procedure and documentation of the workflow is of paramount importance. Documentation should be performed contemporaneously.
6.1 Assess the contents of the image, to determine whether factors are present that can answer the examination request. The examination request generally will fall into one of the following categories:
6.1.1 Analysis to determine the conditions under which, or the process by which, the image was captured or created. Examples include, but are not limited to, the limitations of the recording device, and the inclusion of artifacts based on the file format or compression. This can help to answer the question “How does the recording system affect what is visible in the scene?”
6.1.2 Analysis to determine the physical aspects of the scene, including events captured. Examples include, but are not limited to, the lighting and composition of the scene, the presence of specific objects within the scene, a determination of the interaction between objects in the scene, and a description of events within a scene. This can help to answer the questions “Is a specific object visible in the scene?” or “What happened in the scene?”
6.1.3 Analysis to determine the classification of an object within an image. Examples include, but are not limited to, the make, model, and year of a vehicle, the determination of a manufacturing logo, and the determination of the brand and model of a weapon. This can help to answer the question “What is the object visible in the scene?”
6.1.4 Analysis to determine the location or setting of the image content. Examples may include either a general setting (e.g. Portland, Oregon) or a specific setting (e.g. Conference Room 23, the Northwest Corner). This can help answer the question “Where is the scene?”
6.3 Assess the image for features that contribute to the ability to form a conclusion, and record observed features. Consider the weight or importance of identified features, in order to determine the focus of the examination. Examples of features may include logos, shapes, reflections, or specific items.
How does one move from "artifact" (6.1.1) to "object" (6.1.3)?
As regards “artifact:"
When nominal resolution is low, the analyst must be certain of the provenance of the features within the region of interest in the evidence item. Are the “items of interest” actual representations of things, or are they an “aberration” in the image resulting from compression or other technical limitations? To move from “a few dark blocks of pixels is not an artifact” to “those few dark blocks of pixels are an object,” the rules for classifying an object apply. Additionally, one must rule out the possibility that what one sees is merely an artifact. This must be done in a systematic, coherent fashion in order to be a forensic science exercise.
As regards “object:"
In order to move from “a few dark blocks of pixels” that are not the result of an “aberration” to “firearm,” there need to be Class Characteristics present in the view that would lead to the conclusion of "object." In order to form a conclusion, those Class Characteristics (e.g., overall shape) should be identified by the analyst, along with any features that contribute to differentiating that object from others of its class. Is the nominal resolution in the region of interest sufficient to conclude as to class and individualizing characteristics? This is the objective measure.
The standards and guidelines are there. It is left to the investigator, the analyst, the attorney, and the Trier of Fact to follow the rules and accept the facts as they present themselves. What does the investigator do when told by the analyst that the nominal resolution does not support a conclusion? Does he / she shop around for another analyst? The investigator must not fall victim to cognitive perseverance (bias) when faced with this new information.
Additionally, these stakeholders must understand when they're on sound scientific footing or when they're just taking their best shot.
Abductive reasoning: taking your best shot
Deductive reasoning: conclusion guaranteed
Inductive reasoning: conclusion merely likely
What most in the forensic sciences don't quite acknowledge is that they operate in the world of abductive reasoning most of the time.
From a US Constitution standpoint, from the standpoint of Constitutional Policing, it is for the side offering an item as “proof” of some condition (e.g., object = firearm) to actually prove class = firearm. It perverts the course of justice to require the defense to prove that it isn’t.
This is the level of detail that we get into when I present training in the digital / multimedia forensic sciences, as well as my other educational offerings. If you need help on a case, or want a second opinion, feel free to contact me today.
Thursday, April 25, 2019
Res ipsa loquitur
In the common law of torts, res ipsa loquitur (Latin for "the thing speaks for itself") is a doctrine that infers negligence from the very nature of an accident or injury in the absence of direct evidence on how any defendant behaved. Although modern formulations differ by jurisdiction, common law originally stated that the accident must satisfy the necessary elements of negligence: duty, breach of duty, causation, and injury. In res ipsa loquitur, the elements of duty of care, breach, and causation are inferred from an injury that does not ordinarily occur without negligence.
Res ipsa loquitur is often confused with prima facie ("at first sight"), the common law doctrine that a party must show some minimum amount of evidence before a trial is worthwhile.
(Source)
What on earth does this have to do with multimedia analysis? In so many cases, attorneys argue and judges agree that a video / image is what it is - that it speaks for itself - that an analyst is not needed to explain crucial elements of the evidence item.
Take this image:
The upper section is a depiction of the Chuvash State Opera and Ballet Theater in Cheboksary, Russia. This icon of Brutalist architecture is one of the great examples of the style in Russia, and a must-see for tourists.
The lower section layers in (Photoshops) elements from Star Wars.
Under res ipsa loquitur - the lower section speaking for itself - it could be argued that Imperial troops have occupied central Russia. This is, of course, a ridiculous idea. Here, we're using absurdity to illustrate the absurd.
In terms of modern video evidence - can the video indeed speak for itself? Is the object of interest an artifact of compression, a result of noise, or an element of the scene? How would you know? You would engage an analyst - like me.
Any case involving Forensic Photographic Comparison or Forensic Content Analysis must necessarily start with a "ground truth assessment": what are the elements of the area of interest? A macroblock analysis is often performed. Is the area of interest populated by losslessly encoded data, copied data, predicted data, or error? How would you know? You would know by using validated tools and your trained mind / eyes.
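To make that concrete, here's a minimal sketch of one small, verifiable piece of that examination: using ffprobe (assumed to be installed) to list each frame's picture type - I (intra-coded), P (predicted), or B (bi-predicted) - for a hypothetical evidence file. Knowing whether the frame containing your region of interest is intra-coded or predicted is one quantifiable step toward the ground truth assessment; a full macroblock-level examination still requires validated forensic tools.

```python
# Sketch: list per-frame picture types (I/P/B) with ffprobe.
# Assumes ffprobe is installed; "evidence.mp4" is a hypothetical file name.
import subprocess
from collections import Counter

def frame_picture_types(path):
    """Return the picture type (I, P, or B) of each frame in the first video stream."""
    cmd = [
        "ffprobe", "-v", "error",
        "-select_streams", "v:0",
        "-show_entries", "frame=pict_type",
        "-of", "csv=p=0",
        path,
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return [line.strip() for line in out.stdout.splitlines() if line.strip()]

if __name__ == "__main__":
    types = frame_picture_types("evidence.mp4")
    print(Counter(types))   # e.g. Counter({'P': 240, 'I': 10}) - mostly predicted data
    print(types[:30])       # picture types of the first 30 frames
```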
In Authentication cases, this is no different. Fakes are getting more sophisticated and easier to produce. Analysts need valid tools, training, and education. This is why the training / educational sessions that I offer move beyond simple button pushing into a deeper understanding of the underlying technology, as well as how forgeries are created in the first place. Because "false positives" and "false negatives" can be frequent in authentication tools that are built around data sets, one must know how to interpret results as well as how to validate them with methods that do not rely solely on the software being used. This is why statistics goes hand in hand with the domains of analysis.
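To illustrate what interpreting those results involves, here's a minimal sketch of the error-rate arithmetic an analyst should be comfortable with. The counts are made up purely for illustration - they're not from any particular tool or study.

```python
# Sketch: basic error-rate arithmetic for a hypothetical authentication tool.
# All counts below are invented, chosen only to illustrate the calculations.

true_positives  = 90    # forged items correctly flagged as forged
false_negatives = 10    # forged items missed
true_negatives  = 900   # authentic items correctly passed
false_positives = 100   # authentic items wrongly flagged as forged

false_positive_rate = false_positives / (false_positives + true_negatives)  # 0.10
false_negative_rate = false_negatives / (false_negatives + true_positives)  # 0.10
precision = true_positives / (true_positives + false_positives)             # ~0.47

print(f"False positive rate: {false_positive_rate:.1%}")
print(f"False negative rate: {false_negative_rate:.1%}")
print(f"Share of 'forged' flags that are actually forged: {precision:.1%}")
# Even with 90% sensitivity and specificity, roughly half of the "forged" flags
# in this hypothetical sample are wrong - which is why a tool's output must be
# interpreted, validated, and reported together with its error rates.
```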
If you're interested in training / education, you can sign up for our on-going / upcoming offerings on the web site. Seats are available. Also, with our on-line micro learning offerings, seats are always available as you learn at your own location.
Thursday, April 4, 2019
The "NAS Report," 10 years later
It's been a bit over 10 years since Strengthening Forensic Science in the United States: A Path Forward was released (link). What's changed since then?
As a practitioner and an educator, Chapter 8 was particularly significant. Here's what we should all know - here's what we're all responsible for knowing.
"Forensic Examiners must understand the principles, practices, and contexts of science, including the scientific method. Training should move away from the reliance on the apprentice-like transmittal of practices to education ..."
10 years later, has the situation changed? 10 years later, there are a ton of "apprentice-like" certification programs and just a handful of college programs. College programs are expensive and time consuming. Mid-career professionals don't have the time to sit in college classes for years. Mid-career professionals don't have the money to pay for college. Those that have retired from public service and "re-entered" their profession on the private side face the same challenges.
Years ago, I designed a curriculum set for an undergraduate education in the forensic sciences. It was designed such that it could form the foundation for graduate work in any of the forensic sciences - with electives being discipline-specific. I made the rounds of schools. I made the rounds of organizations like LEVA. I made my pitch. I got nowhere with it. Colleges are non-profit in name only, it seems.
To address the problems of time and money, as well as proximity, I've moved the "curriculum in a box" on-line. The first offering is out now - Statistics for Forensic Analysts. It's micro learning. It's "consumer level," not "egg head" level. It's paced comfortably. It's priced reasonably. It's entirely relevant to the issues raised in the NAS Report, as well as the issues raised in this month's edition of Significance Magazine.
I encourage you to take a look at Statistics For Forensic Analysts. If you've read the latest issue of Significance, and you have no idea what they're talking about or why what they're talking about is vitally important, you need to take our stats class. Check out the syllabus and sign-up today.
Wednesday, April 3, 2019
Why do you need science?
An interesting morning's mail. Two articles released overnight deal with forensic video analysis. Two different angles on the subject.
First off, there's the "advertorial" for the LEVA / IAI certification programs in the Police Chief Magazine.
The pitch for certification was complicated by this image:
The caption for the image further complicated the message for me: "Proper training is required to accurately recover or enhance low-resolution video and images, as well as other visual complexities."
Whilst the statement is true, do you really believe that the improvements to the image happened from the left to the right? Perhaps, for editorial purposes, the image was degraded, from the original (R) to the result (L). If I'm wrong about this, I'd love to see the case notes and the specifics as to the original file. Can you imagine such a result coming from the average CCTV file? Hardly.
Next in the bin was an opinion piece by the Washington Post's Radley Balko - "Journalists need to stop enabling junk forensics." It's seemingly the rebuttal to the LEVA / IAI piece.
Balko picks up where the ProPublica series left off - an examination of the discipline in general, and Dr. Vorder Brugge of the FBI in particular. It's an opinion piece, and it's rather pointed in its opinion of the state of the discipline. Balko, like ProPublica, has been on this for a while now (here's another Balko piece on the state of forensic science in the US).
I don't disagree with any of the referenced authors here. Not one bit. Jan and Kim are correct in that the Trier of Fact needs competent analysts working cases. Balko is correct in that the US still rather sucks at science. That we suck at science was the main reason the Obama administration created the OSAC and the reason Texas created its licensing scheme for analysts.
Where I think I disagree with Jan and Kim is essentially a legacy of the Daubert decision. Daubert seemingly outsourced the qualification process to third parties. It gave rise to the certification mills and to industry training programs. Training to competency means different things to different organizations. For example, I've been trained to competency on the use of Amped's FIVE and Authenticate. But, none of that training included the underlying science behind how the tool is used in case work. For that, I had to go elsewhere. But, Amped Software, Inc, certified me as a user and a trainer of the tools. That (certification) was just a step in the journey to competency, not the destination.
Balko, like ProPublica, notes the problems with pattern evidence exams. Their points are valid. But, it doesn't mean that image comparison can't be accomplished. It does mean that image comparisons should be founded in science. One of those sciences is certainly image science (knowing the constituent parts of the image / video and how the evidence item was created, transmitted, stored, retrieved, etc.). But another one of the sciences necessary is statistics (and experimental design).
As I noted in my letter to the editor of the Journal of Forensic Identification, experimental design and statistics form a vital part of any analysis. For pattern matching, the evidence item may match the CCTV footage. But, would a representative sample of similar items (shirts, jeans, etc) also match? Can you calculate probabilities if you're unaware of the denominator in the function (what's the population of items)? Did you calculate the sample size properly for the given test? Do you have access to a sample set? If not, did you note these limitations in your report? Did these limitations inform your conclusions?
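Here's a purely illustrative sketch of the arithmetic those questions imply - the survey counts are invented and the function names are mine. The point is that a "match" only means something relative to how common the matching features are in a representative sample, and that the estimate carries uncertainty that belongs in the report.

```python
# Sketch: how common is the "matching" feature among similar items?
# All numbers are hypothetical, for illustration only.
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a proportion."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

def sample_size(margin_of_error, p=0.5, z=1.96):
    """Samples needed to estimate a proportion within +/- margin_of_error."""
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

# Hypothetical survey: 14 of 200 similar jackets share the same logo placement
# and stitching pattern seen in the CCTV footage.
low, high = wilson_interval(successes=14, n=200)
print(f"Estimated share of the population that would also 'match': "
      f"{14/200:.1%} (95% CI {low:.1%} to {high:.1%})")

# How many items would you need to examine to estimate that share within +/- 5%?
print(f"Required sample size for a 5% margin of error: {sample_size(0.05)}")
```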
Both LEVA and the IAI have a requirement for their certified analysts to seek and complete additional training / education towards eventual recertification. This is a good thing. But, as many of us know, there are only so many training opportunities. "At some point, you kind of run out of classes to take" is a common refrain. Whilst this may be true for "training" (tool / discipline specific), this is so not true for education. There are a ton of classes out there to inform one's work. The problem there becomes price / availability. This price / availability problem was the primary driver behind my taking my Statistics class out of the college context and putting it on-line as micro learning. My other classes from my "curriculum in a box" concept will roll out later this year and into the next year.
So to the point of the morning's articles - yes, you do need a trained / educated analyst. Yes, that analyst needs to engage in a scientific experiment - governed by image science as well as experimental science. Forensic science can be science, if it's conducted scientifically. Otherwise, it becomes a rhetorical exercise utilizing demonstratives to support its unreproducible claims.
Thursday, March 28, 2019
Calculating Nominal Resolution during Content Triage
In the image / video analysis workflow, the Content Triage step examines the file's contextual information from the standpoint of "can I answer the question with this file?"
The Content Triage step necessarily involves Frame Analysis. Part of Frame Analysis considers the calculation of the Nominal Resolution of the target area - face, shirt, tattoo, license plate, etc. In this post, we'll consider license plates (aka number plates or registration plates) as our target.
Dimensionalinfo.com notes that in the US, standard license plate dimensions are 6 in x 12 in (approximately 152 mm x 305 mm).
In the United Kingdom, for instance, the standard dimensions for registration plates are 520 mm x 111 mm for the front plates, while the rear plates can be the same dimensions or 285 mm x 203 mm.
Australia, on the other hand, has standard dimensions of 372 mm x 135 mm (approximately 14.5 in x 5 in).
The SWGDE Digital & Multimedia Evidence Glossary, Version: 3.0 (June 23, 2016), defines Nominal Resolution as "the numerical value of pixels per inch as opposed to the achievable resolution of the imaging device." "In the case of digital cameras, this refers to the number of pixels of the camera sensor divided by the corresponding vertical and horizontal dimension of the area photographed."
Let's put this all together with an example from Australia. The question / request is: can we resolve the registration plate's characters on the white car, upper center-left of the image shown below?
Ordinarily, you might be tempted to just say no, it's not possible. It's too small. Which of those few pixels do you want me to Photoshop into a registration plate? In this case, we'll attempt to quantify the value of the Nominal Resolution of the target area.
If the typical Australian registration plate is 372mm wide, and the target area is 8px wide, then the Nominal Resolution is ~46.5mm per pixel - meaning each pixel covers a width on target of ~46.5mm (about 1.8 inches). How many pixels wide are needed to resolve characters in a registration plate? My tests have shown that as few as 5-6 columns of pixels can work at distances to target of under 15' for typical CCTV systems. Given that a pixel is generally thought of as the smallest single component of a digital image, you'll need more than a few to resolve small details in an image or video in order to answer questions of identification.
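This back-of-the-envelope calculation is easy to standardize in your case notes. Below is a minimal sketch of the arithmetic just described; the 372mm plate width and 8px target width are the values from this example, and the function name is mine, not from any standard.

```python
# Sketch: nominal resolution of a target region, in real-world units per pixel.
def nominal_resolution(target_width_mm, target_width_px):
    """Millimetres of real-world width covered by each pixel across the target."""
    return target_width_mm / target_width_px

plate_width_mm = 372   # typical Australian registration plate width
target_px = 8          # width of the plate in the evidence frame, in pixels

mm_per_px = nominal_resolution(plate_width_mm, target_px)
print(f"Nominal resolution: ~{mm_per_px:.1f} mm per pixel "
      f"(~{mm_per_px / 25.4:.1f} in per pixel)")
# -> Nominal resolution: ~46.5 mm per pixel (~1.8 in per pixel)
```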
But, but, but ... the video system's specs note that the camera is HD and the recording is 1080p. The system's owner spent a lot of money on the tech. So what?!
Nominal Resolution deals with the target area, not the system's capabilities. The key is distance to target. The farther away from the camera, the lower the Nominal Resolution. The lower the Nominal Resolution, the lower the chance of successfully answering identification questions.
Thus, when responding to requests for service, it's a good idea to calculate the Nominal Resolution of the target area in order to quantify the pixel density of the region, adding the results to your conclusion. A statement such as, "unable to fulfill the request for information, re registration plate details, due to insufficient nominal resolution (~46.5mm per pixel)," is a lot more informative than "sorry, can't do it." If the available data supports no conclusion, adding the quantitative reasons for your conclusion will go a long way to supporting your determination.
If you'd like to know more about these types of topics, join me in an upcoming training session. Click here for more information.