Tuesday, February 25, 2020

Sample Size? Who needs an appropriate sample?

Last year, I spent a lot of time talking about statistics and the need for analysts to understand this important science. I've written a lot about the need for appropriate samples, especially around the idea of supporting a determination of "match" or "identification."

Many in the discipline have responded with, essentially, a shrug: it is what it is; we don't really need to know about these topics or incorporate these concepts into our practice.

Now comes a new study from Sophie J. Nightingale and Hany Farid, Assessing the reliability of a clothing-based forensic identification. If you've been to one of my Content Analysis classes, or one of my Advanced Processing Techniques sessions, reading the new study won't yield much new information from a conceptual standpoint. It will, however, add a substantial body of new data affirming the need for appropriate samples and methods when conducting work in the forensic sciences.

From the new study: "Our justice system relies critically on the use of forensic science. More than a decade ago, a highly critical report raised significant concerns as to the reliability of many forensic techniques. These concerns persist today. Of particular concern to us is the use of photographic pattern analysis that attempts to identify an individual from purportedly distinct features. Such techniques have been used extensively in the courts over the past half century without, in our opinion, proper validation. We propose, therefore, that a large class of these forensic techniques should be subjected to rigorous analysis to determine their efficacy and appropriateness in the identification of individuals."

The important thing about the study is that the authors collected an appropriate set of samples to conduct their analysis.

Check it out and see what I mean. Notice how the results develop from the samples collected, and how they differ from an examination of a single image. It's why I always say that below a certain sample size, you're better off flipping a coin.
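To make that concrete, here's a minimal Python sketch - it computes the 95% Wilson score interval by hand, so no statistics package is assumed - showing how little a small sample actually tells you about an examiner's true accuracy:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for an observed proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# An examiner who gets 90% of trials "correct"...
for n in (10, 100, 1000):
    lo, hi = wilson_interval(int(0.9 * n), n)
    print(f"n={n:5d}: true accuracy somewhere between {lo:.0%} and {hi:.0%}")
```

With only ten trials, a "90% correct" showing is statistically consistent with a true accuracy of around 60% - not far from a coin flip. It takes hundreds of trials before the data meaningfully constrain the error rate.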

If, after reading the paper, you're interested in increasing your knowledge of statistics and experimental science, feel free to sign up for Statistics for Forensic Analysts.

Have a great day, my friends.

Thursday, January 30, 2020

Facial Comparison

FISWG's Facial Comparison Overview describes Morphological Analysis as "a method of facial comparison in which the features of the face are described, classified, and compared.  Conclusions are based on subjective observations." ASTM's E3149 − 18, Standard Guide for Facial Image Comparison Feature List for Morphological Analysis, provides practitioners a 19-table guide of features to aid the examiner in the classification of features within images / videos of faces.

Why bring this up?

A fairly recent case in California highlights the need for practitioners not only to be aware of the consensus standards but also to employ them in their work. In People v Hernandez (link), the unpublished appellate ruling describes the work of an analyst whose novel technique was excluded at the trial level.

"Upon review, we conclude [name omitted] proffered comparisons were based on matter of a type on which an expert may not reasonably rely, and they were speculative. The trial court acted well within its authority as a gatekeeper in essentially determining that [name omitted] was not employing the same level of intellectual rigor of an expert in the relevant field. Notably, the theories relied upon by [name omitted] were new to science as well as the law, and he did not establish that his theories had gained general acceptance in the relevant scientific community or were reliable."

According to FISWG's Facial Comparison Overview, there is consensus on the general methods for Facial Comparison:

  • Holistic Comparison 
  • Morphological Analysis
  • Photo-anthropometry
  • Superimposition

The analyst's choice?

"Asked how he would compare the images, [name omitted]  explained he would use, in part, Euclidean geometry. He admitted this was a technique that other people did not use. Also, he used what he called Michelangelo theory — [name omitted]'s technique of taking away portions of a distorted and/or blurred digital image to reveal the true features of the person in the iPhone video still and Exhibit 8—and an unnamed and unexplained technique for looking at bad images. [name omitted] thought his margin of error was five-to-eight percent."

"On cross-examination, [name omitted] agreed he was "somewhat unique" in using Euclidean geometry in image analysis and comparison. He did not have a scientific degree or a degree in Euclidean geometry. When asked if his use of Euclidean geometry had been subjected to scientific and peer view, he stated, "Sometimes, but not in this case because it's a theorem to understand my logic. I'm not drawing lines. . . . [¶] . . . [¶] I'm using a theory. . . . I'm defending my logic with a theory in geometry[.]" On an as-needed basis, [name omitted] used a member of his staff for peer review. The prosecutor inquired if [name omitted] was aware of anyone using Euclidean geometry in the forensic analysis of photographs like him, and he replied, "By name? No."

"[name omitted] was asked if he had made any effort "to distinguish between artifacts and properties of the individuals depicted" in Exhibit 8. He replied, "No. Not in the report." He was then asked if he tried to make a distinction in his analysis. He said, "As best as . . . one could possibly do, but there's quite a bit of pixilation on that image."

In California, there's a lot of precedent on the admission of expert testimony. This case cites Sargon v USC (link) in addressing whether the trial court's exclusion of the analyst's testimony and work was appropriate.

"[In California,] the gatekeeper's role `is to make certain that an expert, whether basing testimony upon professional studies or personal experience, employs in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.' [Citation.]" (Sargon, supra, 55 Cal.4th at p. 772.)"

"Based on the foregoing, we conclude [name omitted]'s comparisons were properly excluded. His method was full of theories and assumptions, and he ignored some or all of the artifacts at different points. Simply put, his opinion was not based on matter of a type on which an expert may reasonably rely. Beyond that, because [name omitted] essentially confused artifacts for features, his opinion was speculative."

The ruling goes on to note relevant cases to support the conclusion that the exclusion was appropriate. As such, it's a good reference.

But to the point: why make up a new technique when there's plenty of guidance out there regarding photographic comparison / facial comparison? For Morphological Analysis, you can easily translate the ASTM guide into a spreadsheet that can be used to document features and locations (a quick sketch of this follows). Yes, all of the current methods necessarily rest on subjective observations. Conclusions are based upon those observations and, as such, should be adequately supported.
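As a purely illustrative sketch - the feature categories below are a hypothetical subset, not the actual ASTM E3149 table list - generating such a worksheet takes only a few lines of Python:

```python
import csv

# Illustrative subset of feature categories for a morphological analysis
# worksheet; consult ASTM E3149-18 for the authoritative feature list.
FEATURES = [
    "Face/head outline", "Forehead", "Eyebrows", "Eyes", "Nose",
    "Mouth/lips", "Chin/jaw line", "Ears", "Facial hair",
    "Facial lines/wrinkles", "Scars", "Facial marks (moles, freckles)",
]

with open("facial_comparison_worksheet.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Feature", "Image 1 (questioned)", "Image 2 (known)",
                     "Similarity/difference observed", "Notes"])
    for feature in FEATURES:
        writer.writerow([feature, "", "", "", ""])
```

The point isn't the code; it's that each comparison then leaves a documented, reviewable trail of observations supporting the conclusion.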

If you're involved in a case where photographic comparison / facial comparison is at issue, feel free to contact me regarding a review of the evidence and/or the work done previously, whether by your side or by opposing counsel's analyst. It's important to note that giving one's work to "a member of [one's] staff for peer review" is not actually a "peer review"; it's a technical review. If you'd like an actual peer review, contact me today.

Likewise, if you want to learn this amazing discipline, we regularly feature classes on photographic comparison in Henderson, NV. We can also bring the class to you.

Have a great day, my friends.

Thursday, January 23, 2020

What is Super Resolution?

Back in early 2017, I wrote an article about Super Resolution for Axon in support of their now-dissolved partnership with my former employer. It seems that Super Resolution is back in the news. By way of updating that post, let's revisit just what's going on with the technology and a few problems it may cause if you don't understand what's happening.

Vendor reps note that Super Resolution works at the "sub-pixel" level, and people's eyes roll. If the pixel is the smallest unit of measure, a single picture element, how can there be a "sub-pixel?" That's a very good question. Let's take a look at the answer.

From the report in Amped SRL's FIVE: The Super Resolution filter applies a sub-pixel registration to all the frames of a video, then merges the motion-corrected frames together, along with a deblurring filter. If a Selection is set, then the selected area will be optimized.

Ok. What is sub-pixel registration?

First, let's look at how the authors of Super-Resolution Without Explicit Subpixel Motion Estimation set up the premise: "The coefficients of this series are estimated by solving a local weighted least-squares problem, where the weights are a function of the 3-D space-time orientation in the neighborhood. As this framework is fundamentally based upon the comparison of neighboring pixels in both space and time, it implicitly contains information about the local motion of the pixels across time, therefore rendering unnecessary an explicit computation of motions of modest size. The proposed approach not only significantly widens the applicability of super-resolution methods to a broad variety of video sequences containing complex motions, but also yields improved overall performance." That's quite a mouthful.

Here's the breakdown.

The first thing we must understand is the pixel neighborhood. The neighbourhood of a pixel is the collection of pixels that surround it. In a 4-connected neighbourhood, a pixel's neighbours are the pixels that share one of its edges; in an 8-connected neighbourhood, the pixels that touch one of its corners are included as well.
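For those who like to see it in code, a minimal sketch, assuming a NumPy array as the image:

```python
import numpy as np

def neighborhood_8(image, row, col):
    """Return the 3x3 block centered on (row, col): the pixel itself
    plus its 8-connected neighbours (edges and corners)."""
    r0, r1 = max(row - 1, 0), min(row + 2, image.shape[0])
    c0, c1 = max(col - 1, 0), min(col + 2, image.shape[1])
    return image[r0:r1, c0:c1]

image = np.arange(25).reshape(5, 5)   # a toy 5x5 "image"
print(neighborhood_8(image, 2, 2))    # the 3x3 block around the center pixel
```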

Next, we must understand what registration means. Image registration is the process of aligning two or more images of the same scene. One image is designated as the reference (the "fixed" image), and geometric transformations are applied to the other images so that they align with it.
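OpenCV ships a simple sub-pixel registration primitive, phase correlation, which estimates the fractional (x, y) shift between two frames. A minimal sketch - the file names are placeholders, and the sign convention of the returned shift is worth verifying on a test pattern:

```python
import cv2
import numpy as np

# Two frames of the same scene, loaded as grayscale float32 images.
ref = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
mov = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# phaseCorrelate estimates the translation between the frames with
# sub-pixel precision; shifts like (0.37, -0.21) are typical of a
# shaky, hand-held camera.
(dx, dy), response = cv2.phaseCorrelate(ref, mov)
print(f"estimated shift: dx={dx:.2f}px, dy={dy:.2f}px (response {response:.2f})")

# Warp the moving frame back onto the reference grid.
M = np.float32([[1, 0, -dx], [0, 1, -dy]])
aligned = cv2.warpAffine(mov, M, (mov.shape[1], mov.shape[0]))
```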

Let's put it together. A static pixel (P) in a single image is easy to understand. But what about video? Across frames, that pixel corresponds to some location in 3-D space-time (x, y, t), and that location changes as time progresses. We want to line up (register) that pixel across the multiple frames. Super Resolution thus uses the implicit information about the motion of the pixel across space-time and corrects for that motion. The result of the process is a single higher-resolution image.
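Under the hood, the classic "shift-and-add" family of super resolution algorithms does something like the following deliberately simplified sketch. To be clear, this is a generic illustration of the technique, reusing the phaseCorrelate call from the previous sketch; it is not Amped's actual implementation (which also deblurs):

```python
import cv2
import numpy as np

def shift_and_add(frames, scale=2):
    """Naive shift-and-add super resolution: register each frame to the
    first with sub-pixel precision, upscale, shift onto a finer grid,
    and average."""
    ref = frames[0].astype(np.float32)
    h, w = ref.shape
    accum = np.zeros((h * scale, w * scale), np.float32)
    for frame in frames:
        f = frame.astype(np.float32)
        (dx, dy), _ = cv2.phaseCorrelate(ref, f)   # sub-pixel registration
        # Upscale, then shift back by the scaled offset so each frame's
        # fractional motion lands on distinct pixels of the finer grid.
        up = cv2.resize(f, (w * scale, h * scale),
                        interpolation=cv2.INTER_CUBIC)
        M = np.float32([[1, 0, -dx * scale], [0, 1, -dy * scale]])
        accum += cv2.warpAffine(up, M, (w * scale, h * scale))
    return accum / len(frames)
```

The fractional shifts are the whole point: each frame samples the scene at a slightly different position, and aligning those samples on a finer grid can recover detail that no single frame contains.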

The practical implications are these:

  • Frame Averaging works well when the object of interest doesn't move. The frames are averaged and the things that are different across frames are removed and the things that are the same remain.
  • To help with a Frame Averaging exercise, we can use a perspective registration process to align the item of interest - a license plate, for example - across frames. This works well when the item has moved to an entirely new location, as in low frame rate video (see the sketch after this list).
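
Here's a minimal sketch of that registration-plus-averaging workflow, using OpenCV's ORB features and a RANSAC homography. It's a generic illustration, not the filter chain of any particular tool:

```python
import cv2
import numpy as np

def align_and_average(frames):
    """Frame averaging with perspective registration: align every frame
    to the first using ORB feature matches and a homography, then
    average. What differs across frames (noise) cancels out; what stays
    the same (the item of interest) remains."""
    ref = frames[0]
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    kp_ref, des_ref = orb.detectAndCompute(ref, None)
    accum = ref.astype(np.float32)
    for frame in frames[1:]:
        kp, des = orb.detectAndCompute(frame, None)
        matches = sorted(matcher.match(des, des_ref),
                         key=lambda m: m.distance)[:50]
        src = np.float32([kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        warped = cv2.warpPerspective(frame, H, (ref.shape[1], ref.shape[0]))
        accum += warped.astype(np.float32)
    return (accum / len(frames)).astype(np.uint8)
```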

But, when the motion is subtle, super resolution is a better choice.

Here's an example. The park service was investigating a vandalism and poaching incident. There's a video that they believe was taken in the area of the incident. Within the video, there's a sign in the background that contains location information (text) that's blurred by the motion of the shaking, hand-held camera. There's enough motion to eliminate Frame Averaging as a processing choice. There's not enough motion to use a perspective registration function to align the sign correctly. Super resolution is the best choice to correct for the motion blur and register the pixels that make up the text of the sign.

In this case, super resolution was indeed the best choice. The sign's information was revealed and the location was determined.

And now the potential pitfalls ...

  • Brand new pixels and pixel neighborhoods are created in this process.
  • A brand new piece of evidence (a demonstrative) is created in this process.

Whenever you perform a perspective registration, your geometric transform necessarily creates new pixels and neighborhoods. In FIVE, during the process of using the filters, the creation is "virtual" in that it all happens in CPU and RAM. The new pixels and neighborhoods only persist when you write the results of your processing out as a new file.
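To see what "brand new pixels" means concretely, upscale a two-by-two image with bilinear interpolation; the output is full of values that never existed in the original:

```python
import cv2
import numpy as np

# A 2x2 "image" containing only three distinct pixel values...
tiny = np.array([[0, 100], [100, 200]], dtype=np.float32)

# ...upscaled with bilinear interpolation. The result contains
# intermediate values (25, 50, 75, ...) that appear nowhere in the
# original data: brand new pixels, created by the transform.
print(cv2.resize(tiny, (4, 4), interpolation=cv2.INTER_LINEAR))
```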

That brand new piece of evidence - the results written out - is a demonstrative that you've just created. You must explain its relationship to the actual evidence files and how it came to be. Indeed, you've just added a new file to the case. This fact should be disclosed in your report.

FIVE's reports include a plain-English statement about each process, lifted from the academic papers from which Amped SRL derives its filters. Sure, when you're asked about the process performed, you can likely just read the report's description. But what if the Trier of Fact wants to know more? How confident are you that you can explain super resolution?

Consider super resolution's main use - license plate enhancement. Your derivative file is a demonstrative in support of one side's theory of the case. Your derivative is illustrative of your opinion. Did you use the tool correctly? Are the results accurate? Is seeing believing? Given the ultra-low resolutions we're usually dealing with, a slight shift in pixels can make a big difference in rendering alpha-numeric characters. This is part of the reason Amped SRL likes to use European license plates in their classes and PR - they're easy to fix. Not so in the US.

Advice like the above is the value of independence. A manufacturer's rep can really only show you the features. I'll show you not only how a tool works, but how to use it in different contexts, why it's sometimes inappropriate to use, and how to frame its use during testimony. If you're interested in diving deep into the discipline of video forensics, I invite you to an upcoming course. See our offerings on our web site.

Have a great day, my friends.

Tuesday, January 7, 2020

The FTC vs Axon? Axon vs the FTC? Wow!

Whilst we were all minding our own business, it seems that the US Federal Trade Commission was busy investigating Axon for anti-competitive behavior. Last Friday, Axon CEO, Rick Smith, penned a piece on LinkedIn to make his case to the public. According to Smith, the FTC believes that Axon's acquisition of VieVu in 2018 was anti-competitive.

I'm not a fly on the wall. I only know what I've read in Smith's post and the subsequent reporting and interviews. To be fair, the press has let Smith tell Axon's side of the story. For the government's side, we have only a press release on the FTC's web site.

In terms of disclosure, it should be noted that within the scope of my prior employment with Amped Software, Inc. (an Axon partner), I worked closely with several internal business units within Axon.

But, I want to break down the FTC's press release to attempt to determine what's really the problem here.

Paragraph 1: "The Federal Trade Commission has issued an administrative complaint (a public version of which will be available and linked to this news release as soon as possible) challenging Axon Enterprise, Inc.’s consummated acquisition of its body-worn camera systems competitor VieVu, LLC. Before the acquisition, the two companies competed to provide body-worn camera systems to large, metropolitan police departments across the United States."

Analysis: Yes, they were in the same market. But, given the many quality issues with VieVu's product line, they weren't really competing - in the same way that the best Texas high school football team is really no competition for the worst of the NFL in any given year. In my opinion, VieVu got "competitive" on deals by competing on price, not quality. It was their low price that got them in the door at police departments, but it was their lack of quality that ruined the company.

Paragraph 2: "According to the complaint, Axon’s May 2018 acquisition reduced competition in an already concentrated market. Before their merger, Axon and VieVu competed to sell body-worn camera systems that were particularly well suited for large metropolitan police departments. Competition between Axon and VieVu resulted in substantially lower prices for large metropolitan police departments, the complaint states. Axon and VieVu also competed vigorously on non-price aspects of body-worn camera systems. By eliminating direct and substantial competition in price and innovation between dominant supplier Axon and its closest competitor, VieVu, to serve large metropolitan police departments, the merger removed VieVu as a bidder for new contracts and allowed Axon to impose substantial price increases, according to the complaint."

Analysis: Given the analysis of the first paragraph, VieVu was certainly not "particularly well suited" to deliver on any department's needs. Additionally, it wasn't "competition" that drove prices down; it was VieVu essentially offering its goods below cost to get in the door. Selling below cost isn't sustainable, and police agencies must weigh every aspect of a vendor - including whether unsustainable business practices mean the company won't be around throughout the lifecycle of the product.

Paragraph 3: “Competition not only keeps prices down, but it drives innovation that makes products better,” said Ian Conner, Director of the FTC’s Bureau of Competition. “Here, the stakes could not be higher. The Commission is taking action to ensure that police officers have access to the cutting-edge products they need to do their job, and police departments benefit from the lower prices and innovative products that competition had provided before the acquisition.”

Analysis: the market is still chock-full of offerings. There's Motorola/Watchguard, Panasonic, Getac, Utility, Coban, Visual Labs/Samsung, L3/Mobile Vision, and Digital Ally, plus over 10k results from China on alibaba.com. You can get a body camera from China's LS Vision for under $100/unit. That's a lot of competition.

Paragraph 4: "The complaint also states that as part of the merger agreement, Axon entered into several long-term ancillary agreements with VieVu’s former parent company, Safariland, that also substantially lessened actual and potential competition. These agreements barred Safariland from competing with Axon now and in the future on all of Axon’s products, limited solicitation of customers and employees by either company, and stifled potential innovation or expansion by Safariland. These restraints, some of which were intended to last more than a decade, are not reasonably limited to protect a legitimate business interest, according to the complaint."

Analysis: This part is just silly. Axon says to Safariland, known for their holsters and gear, stay with what you're good at (holsters and gear) and we'll stay with what we're good at. Stay out of our lane, and we'll stay out of yours. This is a good business decision, not anti-competitive behavior. You also have to be myopic to not consider that Safariland only bought VieVu in 2015. According to the WSJ, "Vievu LLC, a maker of police body cameras, has been acquired by Safariland LLC, which is bulking up its portfolio of security products ahead of a planned initial public offering next year." Safariland's entry into other vertical markets followed a similar pattern. But, at their heart, they're a holster and gear company, so their exit from the technology sector is no great loss.

Paragraph 5: "The Commission vote to issue the administrative complaint was 5-0. The administrative trial is scheduled to begin on May 19, 2020."

Analysis: What's missing is a specific citation as to which federal laws were violated. Likely, there was no specific violation of US law, but rather a violation of an FTC rule. The FTC has the authority to pass and enforce its own rules outside the normal US lawmaking process. Smith outlines the administrative hearing process in his op-ed. Smith is correct; this won't see a "court room," as the vast majority of FTC processes are kept in-house.

An examination of the FTC's "Competition Enforcement Database" found only 25 competition enforcement actions for 2018, down from 32 in 2017. Given the totality of commercial activity in the US, this is an incredibly small number. The assumption is that they only go after the most egregious behaviors. If that's the case, what's really behind this action against Axon? VieVu was delivering faulty products. It was losing deals on its own. Axon did a mutually beneficial deal with Safariland to take VieVu off their hands. What's actually wrong with this? Does it rise to a Standard Oil or AT&T level? Hardly. So why this case? That's the problem with administrative processes: we'll never know. There's a complete lack of transparency into their decision-making and procedures.

I do tend to agree with Smith that this issue rises above brands and technology. It's a peek into the workings of the Administrative State in the US. What remains to be seen is if the US government grants Axon permission to sue the FTC. Stay tuned.

Monday, December 30, 2019

An Amped FIVE UX tip to wrap up the year

The recent update to Amped SRL's flagship tool, FIVE, brought some UX headaches for many US-based users. You see, the redesigned reporting function does something new and unexpected. Let's take a look, and offer a couple of work-arounds.

Let's say you're used to your Evidence.com workflow, the one where all your evidence goes into a single folder for upload, and you're processing files for a multi-defendant case. If there are files featuring only one of the defendants, which happens often, you'll want to have a separate project (.afp) file for each evidence item. This will make tagging easier. This will make discovery easier. This will also make the new reporting functionality non-functional.

You see, the new reporting feature doesn't just create a report; it creates a folder to hold the report and the Amped SRL logos - and calls the folder "report." That's fine for the first file that you process. But the next project? Well, when you go to generate the report, FIVE will see that there's a "report" folder there already. What does it do? Does it prompt you with, "Hey! There's already a folder with that name. What do you want to do?" Of course not. With no reason to expect the new behavior, and no documentation of the new reporting format, you'll just keep processing away. At the end of your work, there's only one folder and only one report file - each report has silently replaced the last.

The work-around on your desktop is to put each evidence item into its own folder within the case folder (a quick sketch of this follows). It's an extra step, I know. You'll also have to modify the "inbox" that E.com is looking for.
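Here's a throwaway Python sketch of that workaround - the case folder path is hypothetical - that sorts each evidence file into its own subfolder before you create your projects:

```python
import shutil
from pathlib import Path

case_folder = Path(r"C:\Cases\2020-001")   # hypothetical case folder

# Give each evidence file its own subfolder, so that each FIVE project
# writes its "report" folder somewhere unique instead of clobbering the
# previous project's report.
for item in [p for p in case_folder.iterdir() if p.is_file()]:
    dest = case_folder / item.stem
    dest.mkdir(exist_ok=True)
    shutil.move(str(item), str(dest / item.name))
```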

The other weird issue is that FIVE now drops some logos and a banner as loose files in the report folder. I'm sure that this is due to FIVE's processing of the report - first to HTML - then to PDF via some freeware. One would think that in choosing PDF you wouldn't receive the side-car files, but you would be thinking wrong. Again, this has to do with the way the bit of freeware processes the report.

As an interesting aside, in a Daubert hearing, I actually got a cross-examination question that hinted at Amped SRL being software pirates: is anything in the product an original creation, or is it just pieced together from bits and such? But, I digress.

Remember, in the US, anything created in the process of analysis should be preserved and disclosed. One customer complained that it seems as though Amped SRL is throwing an extra business card in on the case file. I don't know about that. But, it does seem a bit odd for a forensic science tool to behave in such a way.

You can always revert to the previous version if you want to save time and preserve your sanity; this new version doesn't add much for the analyst anyway. Previous versions are easy to install, and doing so takes only a few minutes.

As with anything forensic science, always validate new releases of your favorite tools prior to deploying them on casework. If you're looking for an independent set of eyes, and would like to outsource your tool validation, we're here to help.

Have a great new year, my friends.