Tuesday, September 22, 2020

So long, and thanks for all the fish

 This will be the final post in this space. I've retired from the practice. I'll leave this free resource up as an artifact and a reference. 

So long, and thanks for all the fish.

Friday, March 27, 2020

A cultural shift in training delivery is happening

First of all, I hope this post finds you and yours in good health. I hope that you have enough to eat and have enough resources to meet your basic needs. I know that many folks have been sent home to work, some have even lost their jobs (some temporarily, some permanently). With courts closing and pushing trial calendars out, most of the legal support world is on hold. It's rough out there. I get it. I'm living it too.

I've had ample time to get caught up on projects, write papers, and fulfill my continuing education requirements. I switched to on-line learning for my own continuing education a while back, going 100% on-line about two years ago. As a consequence, I've seen the good, the bad, and the ugly of training and education offerings.

With the current crisis, vendors in the forensic science space are stuck. The learner population has, for generations, operated under the belief that they need to be physically present in the room with the instructor and their fellow students. Providers have reinforced this by only offering in-person training. Some occasionally ventured on-line, but the learners weren't there; it wasn't profitable so it wasn't offered again.

Now, travel is restricted. People have been sent home. Without all the usual economic activities, municipalities are seeing gigantic holes in their budget projections. The state and local agencies are appealing to the federal government for help. As of today, that help is still being debated in Congress. What it will look like, what will be prioritized, is yet to be seen. But, if government agencies' reaction to this crisis mirrors that of the 2007-2010 financial crisis, personnel will face pay cuts and only essential functions will be funded. Last time, training was not deemed essential. I'm guessing that will be the case again.

The vendors understand this too. Most have announced changes in their licensing terms as well as updates to their training schedules, moving training on-line. But, it's not that easy to shift paradigms. Organizing and delivering an in-person training session is entirely different from organizing and delivering an on-line training session. I should know; I've done both.

Some vendors are offering "webinar" based training. With these, you log into a portal like Zoom, and you watch as the instructor leads you through a "broadcast" version of the usual in-person training. You might see a split screen with their talking head and their computer's screen. But, Zoom users are facing throttling issues as so many are now working from home. Zoom, and its competitors, have offered so many "free" accounts that they're now getting swamped. A few instructors are using these "free" accounts to host these "webinars." It's not going well. Added to this is the dispersed nature of the learner population. A 9-5 class, hosted in New Jersey, means a west coast USA learner must be up and ready to learn at 6am. Not optimal. I faced this issue learning SalvationData's VIP. Given that their instructor was in central China, the course ran from 6-10pm, then continued in the early morning after I got a rather brief nap. It was crazy.

The other problem is price. If a vendor is utilizing a free or low cost service to host a webinar, do you expect to pay the same price as an in-person training session, where you can not only get hands-on help from the instructor but also interact with your fellow learners? I should think not. Yet, many vendors do not discount their on-line offerings, or offer minimal discounts. They're counting on the fact that learners are often not spending their own money, but drawing from their agency's training funds. They're not price conscious because they don't have to be. Now, with the current budget uncertainty, learners must be very price conscious if they're going to get their training requests approved.

I share the aspirational goals of my country's leaders in that I hope to see the country back to work by Easter. It's aspirational - a best case scenario. Given the hit that the economy of the world has taken, I don't think the old training model will ever see the light of day again. Vendors must face the reality of restricted funds and restricted travel.

Back when I was working with Amped Software, Inc., and with Axon, many of us saw this problem coming. We saw the mentality of treating training like an extended vacation as unsustainable. Training staff cannot be everywhere at once. Staffing costs are quite expensive, as is travel to the training location. In an economic crisis, the first thing to get eliminated is training. It happened before. It would happen again - and it has. I spent about a year researching the best options for moving on-line, eventually arriving at LearnUpon as our LMS provider about the time the deal between Amped and Axon collapsed. When Apex was born from this, we were ready to move our offerings on-line. First out of the gate was Statistics for Forensic Analysts, a fully validated course delivered on-line as micro learning. Along the way, I earned a Master's in Education - Instructional Design.

All of my in-person courses have been totally revamped and assembled as micro learning offerings. The design and delivery isn't negatively affected by bandwidth problems. Our more popular introductory courses are currently available and have active learner populations. They're steeply discounted versus in-person offerings. We don't use "free" webinar services; we have invested in a state-of-the-art LMS, which does cost us a bit of money each year. Our more advanced courses will be available soon, now that I have more time to focus on the deliverables. Deliverables? Yes, our Photogrammetry class, for example, includes a complete set of instructions for going to your local building supply store to get the materials you'll need to build a reference rig for creating reverse projection recordings. It also includes the printer's template for creating your own resolution / height charts. With the current travel restrictions, as well as businesses' focus on the current crisis, it's not responsible to require learners to venture out to do these things. Thus, we'll wait until the world recovers.

Yes, we're obviously not offering "official" "vendor approved" courses. But, in fairness, how many vendors have instructional designers on staff? How many have formally validated their on-line courses as fulfilling their stated learning goals? How many have years of experience in the on-line space and utilize the best tools to deliver their training? To that end, how many actually understand how to create learning goals and outcomes for on-line learning events, then deliver upon those goals? Given the amount of grief I'm witnessing - head over to LinkedIn to see the stories - I'm guessing that learners are experiencing pretty terrible courses. Just because one is a subject matter expert, or a good in-person instructor, does not mean that one will be successful delivering upon learning goals in the asynchronous e-learning space.

Once you get settled and are looking for something to distract you from the fact that you've run through the entirety of the Netflix catalog, we're here for you with some amazing learning opportunities. You can sign up and start learning any time, no need to wait or coordinate schedules in order to attend a "webinar" hosted in some distant location. Our learning opportunities fit your schedule, that's the beauty of micro learning.

Have hope, my friends. Make aspirational goals. We'll get through this. Let's utilize the downtime to create good habits of self-care and self-improvement. Let's not waste our most precious resource, time. Have a great day.

Wednesday, March 25, 2020

Working from home? Remember, Alexa hears everything.

As firms and agencies urge their employees to work from home during the global pandemic, their employees’ confidential phone calls run the risk of being heard by Amazon.com Inc. and Google.

Mishcon de Reya LLP, the U.K. law firm that famously advised Princess Diana on her divorce and also does corporate law, issued advice to staff to mute or shut off listening devices like Amazon’s Alexa or Google’s voice assistant when they talk about client matters at home, according to a partner at the firm. It suggested not to have any of the devices near their work space at all.

Mishcon’s warning covers any kind of visual or voice enabled device, like Amazon and Google’s speakers. But video products such as Ring, which is also owned by Amazon, and even baby monitors and closed-circuit TVs, are also a concern, said Mishcon de Reya partner Joe Hancock, who also heads the firm’s cybersecurity efforts.

“Perhaps we’re being slightly paranoid but we need to have a lot of trust in these organizations and these devices,” Hancock said. “We’d rather not take those risks.”

The firm worries about the devices being compromised, less so with name-brand products like Alexa and more so with cheap knock-off devices, he added.

Like Wall Street, law firms are facing challenges trying to impose secure work-from-home arrangements for certain job functions. Critical documents, including those that might be privileged, need to be secured. Meanwhile in banking, some traders are being asked to work at alternative locations that banks keep on standby for disaster recovery instead of makeshift work-from-home stations to maintain confidentiality.

Smart speakers, already notorious for activating in error, making unintended purchases or sending snippets of audio to Amazon or Google, have become a new source of risk for businesses. As of last year, the U.S. installed base of smart speaker devices was 76 million units and growing, according to a Consumer Intelligence Research Partners report.

Amazon and Google say their devices are designed to record and store audio only after they detect a word to wake them up. The companies say such instances are rare, but recent testing by Northeastern University and Imperial College London found that the devices can activate inadvertently between 1.5 and 19 times a day.

Tech companies have been under fire for compromising users' privacy by having teams of human auditors listen to conversations without consent to improve their AI algorithms. Google has since said that users have to opt in to let the tech giant keep any voice recordings made by the device. Amazon now lets its users set up automatic deletion of recordings, and opt out of manual review.

The law firm’s warning first surfaced on an Instagram account “justthequant,” where people share their intel and struggles of working from home.

This story was originally published on Bloomberg.com. Read it here.

Wednesday, March 11, 2020

Definitions

When creating case reports, I like to use the terms from our discipline as defined in the various standards documents. Here are some of the most popular terms, and their definitions. Saving these here for quick reference.

DEFINITIONS

The following definitions for terms used herein are taken from the SWGDE Digital & Multimedia Evidence Glossary, Version 3.0 (June 23, 2016):

  • Compression - The process of reducing the size of a data file. (See also, “Lossy Compression” and “Lossless Compression”.) 
  • Compression Ratio - The size of a data file before compression divided by the file size after compression.
  • Forensic Photogrammetry - The process of obtaining dimensional information regarding objects and people depicted in an image for legal applications. 
  • Image Analysis - The application of image science and domain expertise to examine and interpret the content of an image, the image itself, or both in legal matters. 
  • Image Comparison (Photographic Comparison) - The process of comparing images of questioned objects or persons to known objects or persons or images thereof, and making an assessment of the correspondence between features in these images for rendering an opinion regarding identification or elimination.
  • Image Content Analysis - The drawing of conclusions about an image. Targets for content analysis include, but are not limited to: the subjects/objects within an image; the conditions under which, or the process by which, the image was captured or created; the physical aspects of the scene (e.g., lighting or composition); and/or the provenance of the image.
  • Image Enhancement - Any process intended to improve the visual appearance of an image or specific features within an image. 
  • Multimedia Evidence - Analog or digital media, including, but not limited to, film, tape, magnetic and optical media, and/or the information contained therein.
  • Native File Format - The original form of a file. A file created with one application can often be read by others, but a file’s native format remains the format it was given by the application that created it. In most cases the specific attributes of a file (for example, fonts in a document) can only be changed when it is opened with the program that created it. [Newton’s Telecom Dictionary] 
  • Nominal Resolution - The numerical value of pixels per inch as opposed to the achievable resolution of the imaging device. In the case of flatbed scanners, it is based on the resolution setting in the software controlling the scanner. In the case of digital cameras, this refers to the number of pixels of the camera sensor divided by the corresponding vertical and horizontal dimension of the area photographed. 
  • Photogrammetry - The art, science, and technology of obtaining reliable information about physical objects and the environment through the processes of recording, measuring, and interpreting photographic images and patterns of electromagnetic radiant energy and other phenomena. [The Manual of Photogrammetry, 4th Edition, 1980, ASPRS] In forensic applications, Photogrammetry, sometimes called “mensuration,” most commonly is used to extract dimensional information from images, such as the height of subjects depicted in surveillance images and accident scene reconstruction. Other forensic photogrammetric applications include visibility and spectral analyses. When applied to video, this is sometimes referred to as “videogrammetry.” 
  • Pixel - Picture element, the smallest component of a picture that can be individually processed in an electronic imaging system [The Focal Encyclopedia of Photography, 4th Edition 2007].
  • Proprietary File Format - Any file format that is unique to a specific manufacturer or product.
  • Quantitative Image Analysis - The process used to extract measurable data from an image. 
  • Validation - The process of performing a set of experiments, which establishes the efficacy and reliability of a tool, technique or procedure or modification thereof. 
  • Video Analysis - The scientific examination, comparison, and/or evaluation of video in legal matters.
  • Video Enhancement - Any process intended to improve the visual appearance of video sequences or specific features within video sequences.

The following definitions for terms used herein are taken from the SWGDE Best Practices for Photographic Comparison for All Disciplines, Version 1.1 (July 18, 2017):

  • Class Characteristic – A feature of an object that is common to a group of objects.
  • Individualizing Characteristic – A feature of an object that contributes to differentiating that object from others of its class.

Tuesday, March 10, 2020

Considering forensic science: individual differences, opposing expert testimony and juror decision making

In criminal and civil trials around the world, both sides will often retain experts in various forensic science fields to analyze evidence and present their findings to the jury. In a fair process employing science, it's hoped that two similarly trained and equipped experts will arrive at the same place in terms of conclusions. But, this is often not the case.

When experts disagree, judges (acting as gatekeepers) will often allow both sides to present their witnesses and their evidence, relying upon juries (as finders of facts) to decide on the truth of the matter. One is left to wonder, how reliable are juries in accurately engaging in this essential task?

A fascinating study published in 2018 addresses this issue: Considering forensic science: individual differences, opposing expert testimony and juror decision making (link).

Abstract: "Two experimental studies examined the effect of opposing expert testimony on perceptions of the reliability of unvalidated forensic evidence (anthropometric facial comparison). In the first study argument skill and epistemological sophistication were included as measures of individual differences, whereas study two included scores on the Forensic Evidence Evaluation Bias Scale. In both studies participants were assigned to groups who heard: (1) no expert testimony, (2) prosecution expert testimony, or (3) prosecution and opposing expert testimony. Opposing expert testimony affected verdict choice, but this effect was mediated by perceptions of reliability of the initial forensic expert's method. There was no evidence for an effect on verdict or reliability ratings by argument skill or epistemology. In the second experiment, the same mediation effect was found, however scores on one subscale from the FEEBS and age also affected both verdict and methodological reliability. It was concluded that opposing expert testimony may inform jurors, but perceptions of the reliability of forensic evidence affect verdict, and age and bias towards forensic science influence perceptions of forensic evidence. Future research should investigate individual differences that may affect perception or bias towards forensic sciences under varying conditions of scientific reliability."

A fascinating and informative read.

Enjoy your day, my friends.

Monday, March 9, 2020

COVID-19 and the cancellation of everything

I received word today that the IWCE 2020 conference in Las Vegas has been postponed. I was scheduled to speak at the conference, so I suddenly have a few more free days on my calendar.

Having recently travelled cross country, twice, I can tell you that no one is quite sure what to do or how to act. I think US companies and agencies are acting with an abundance of caution thus far. Meanwhile, according to CNBC, Italy has placed itself on lockdown.

Practice good hygiene. Get plenty of rest. Eat right. Breathe. Remember, we got through SARS, Swine Flu, and Bird Flu in the recent past. We'll get through this.

BTW, if you're looking for something to do whilst self-quarantining, Amped released a new version of FIVE today. Three cheers: along with some UI / UX improvements, audio redaction has finally arrived. Hip hip hooray!

Stay safe, my friends.

Tuesday, March 3, 2020

How accurate are satellite images?

Amped SRL recently announced an update to Authenticate which includes an integration with SunCalc.org. It's a pretty cool free on-line tool to check for sun positioning relative to time and place. Kudos to Amped for adding the integration and speeding up the workflow for this task.

The discussion of this new integration on Amped's blog notes that this integration can be used to check theories of the case relative to statements made and either supported or refuted by images taken of the scene.

As a technical investigator and analyst, I find this a cool new integration; it speeds up what many of us, myself included, were occasionally doing manually anyway.
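
For those curious, here's a minimal sketch of the manual version of that check in Python, using the astral library (one option among several; the coordinates and timestamp below are placeholders I've made up, not case data):

```python
import datetime
from astral import Observer
from astral.sun import azimuth, elevation

# Placeholder scene location (roughly Henderson, NV) and time of interest.
scene = Observer(latitude=36.04, longitude=-115.00)
when = datetime.datetime(2020, 3, 3, 15, 30, tzinfo=datetime.timezone.utc)

# Compare the computed sun position against the shadow direction and
# length observed in the questioned image.
print(f"Azimuth:   {azimuth(scene, when):.1f} degrees")
print(f"Elevation: {elevation(scene, when):.1f} degrees")
```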

An important caveat about SunCalc.org is necessary, however. They utilize ESRI maps when showing the satellite view overlay (ESRI Satellite).

Available Base Maps at SunCalc.org 

Ordinarily, this wouldn't be a problem. However, if you head over to the ESRI web site and ask how frequently the World Imagery basemaps are updated, you might be surprised at the answer.

Answer:

"The World Imagery basemap is not collectively updated. Rather, on occasion, updates occur on the different images within the basemap, and there is no actual known cycle for this activity. 

The basemap is made up of several imagery tiles. In ArcMap, the Identify tool can be used to find out the date when a specific tile of interest was updated. 

Additional information is available in the Related Information section below, including metadata for the basemap and a list of all Community Maps Program contributors."

Yes, in ArcMap, you can use the Identify tool to determine the age of the tile. But, you're not using ArcMap, you're using SunCalc. There's really no way within the SunCalc interface to determine the age of the tile you happen to be viewing.


As a way of illustrating the point, the view I examined contained a feature on the property of interest that hasn't been physically present for some time. Thus, that image tile is at least several months old. With this in mind, you'll want to be extremely careful in making judgements about the scene when using SunCalc.

Don't get me wrong. I like SunCalc. But, in the fast-changing western US, one has to be sure that the features in question are actually present in the view seen on SunCalc's page.

Have a good day, my friends.

Saturday, February 29, 2020

A D.C. judge issues a much-needed opinion on 'junk science'

Radley Balko is at it again. This time, the focus of his attention is a ruling on tool-mark analysis.

"This brings me to the September D.C. opinion of United States v. Marquette Tibbs, written by Associate Judge Todd E. Edelman. In this case, the prosecution wanted to put on a witness who would testify that the markings on a shell casing matched those of a gun discarded by a man who had been charged with murder. The witness planned to testify that after examining the marks on a casing under a microscope and comparing it with marks on casings fired by the gun in a lab, the shell casing was a match to the gun.

This sort of testimony has been allowed in thousands of cases in courtrooms all over the country. But this type of analysis is not science. It’s highly subjective. There is no way to calculate a margin for error. It involves little more than looking at the markings on one casing, comparing them with the markings on another and determining whether they’re a “match.” Like other fields of “pattern matching” analysis, such as bite-mark, tire-tread or carpet-fiber analysis, there are no statistics that analysts can produce to back up their testimony. We simply don’t know how many other guns could have created similar markings. Instead, the jury is simply asked to rely on the witness’s expertise about a match."

As noted in the previous post, the "pattern matching" comparisons are prone to error when an appropriate sample size is not used as a control.

The issue, as far as statistics are concerned, is not necessarily the observations of the analyst but the conclusions. Without an appropriate sample, how does one know where the observed results would fall within a normal distribution? Are the results one is observing "typical" or "unique"? How would you know? You would construct a valid test.
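
To make this concrete, here's a minimal sketch in Python (my illustration, not anything from Balko's piece) of the first question a valid test must answer: how many samples do you need before an observed "match" rate is meaningfully distinguishable from chance?

```python
import math
from scipy.stats import norm

def required_sample_size(p: float = 0.5, margin: float = 0.05,
                         confidence: float = 0.95) -> int:
    """Normal-approximation sample size for estimating a proportion.

    p          -- anticipated proportion (0.5 is the conservative worst case)
    margin     -- acceptable margin of error (half-width of the interval)
    confidence -- desired confidence level
    """
    z = norm.ppf(1 - (1 - confidence) / 2)  # two-sided critical value
    return math.ceil((z ** 2) * p * (1 - p) / margin ** 2)

# To estimate a proportion within +/-5% at 95% confidence, you need
# roughly 385 samples -- far more than the single questioned image
# many comparisons rest on.
print(required_sample_size())  # 385
```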

Balko's point? No one seems to be doing this - conducting valid tests. Well, almost no one. I certainly do - conduct valid tests, that is.

If you're interested in what I'm talking about and want to learn more about calculating sample sizes and comparing observed results, sign up today for Statistics for Forensic Analysts (link).

Have a great weekend, my friends.

Tuesday, February 25, 2020

Sample Size? Who needs an appropriate sample?

Last year, I spent a lot of time talking about statistics and the need for analysts to understand this important science. I've written a lot about the need for appropriate samples, especially around the idea of supporting a determination of "match" or "identification."

Many in the discipline have responded essentially saying, it is what it is - we don't really need to know about these topics or incorporate these concepts in our practice.

Now comes a new study from Sophie J. Nightingale and Hany Farid, Assessing the reliability of a clothing-based forensic identification. If you've been to one of my Content Analysis classes, or one of my Advanced Processing Techniques sessions, reading the new study won't yield much new information from a conceptual standpoint. It will, however, lend a bunch of new data affirming the need for appropriate samples and methods when conducting work in the forensic sciences.

From the new study: "Our justice system relies critically on the use of forensic science. More than a decade ago, a highly critical report raised significant concerns as to the reliability of many forensic techniques. These concerns persist today. Of particular concern to us is the use of photographic pattern analysis that attempts to identify an individual from purportedly distinct features. Such techniques have been used extensively in the courts over the past half century without, in our opinion, proper validation. We propose, therefore, that a large class of these forensic techniques should be subjected to rigorous analysis to determine their efficacy and appropriateness in the identification of individuals."

The important thing about the study is that the authors collected an appropriate set of samples to conduct their analysis.

Check it out and see what I mean. Notice how the results develop from the samples collected. See how they differ from an examination of a single image. That's why I always say: under a certain sample size, you're better off flipping a coin.

If, after reading the paper, you're interested in increasing your knowledge of statistics and experimental science, feel free to sign-up for Statistics for Forensic Analysts.

Have a great day, my friends.

Thursday, January 30, 2020

Facial Comparison

FISWG's Facial Comparison Overview describes Morphological Analysis as "a method of facial comparison in which the features of the face are described, classified, and compared. Conclusions are based on subjective observations." ASTM's E3149-18, Standard Guide for Facial Image Comparison Feature List for Morphological Analysis, provides practitioners a 19-table guide of features to aid the examiner in the classification of features within images/videos of faces.

Why bring this up?

A fairly recent case in California highlights the need for practitioners to be not only aware of the consensus standards but to employ them in their work. In People v Hernandez (link), the unpublished appellate ruling describes the work of an analyst employing a novel technique which was excluded at the trial level.

"Upon review, we conclude [name omitted] proffered comparisons were based on matter of a type on which an expert may not reasonably rely, and they were speculative. The trial court acted well within its authority as a gatekeeper in essentially determining that [name omitted] was not employing the same level of intellectual rigor of an expert in the relevant field. Notably, the theories relied upon by [name omitted] were new to science as well as the law, and he did not establish that his theories had gained general acceptance in the relevant scientific community or were reliable."

According to FISWG's Facial Comparison Overview, there is consensus on the general methods for Facial Comparison:

  • Holistic Comparison 
  • Morphological Analysis
  • Photo-anthropometry
  • Superimposition

The analyst's choice?

"Asked how he would compare the images, [name omitted]  explained he would use, in part, Euclidean geometry. He admitted this was a technique that other people did not use. Also, he used what he called Michelangelo theory — [name omitted]'s technique of taking away portions of a distorted and/or blurred digital image to reveal the true features of the person in the iPhone video still and Exhibit 8—and an unnamed and unexplained technique for looking at bad images. [name omitted] thought his margin of error was five-to-eight percent."

"On cross-examination, [name omitted] agreed he was "somewhat unique" in using Euclidean geometry in image analysis and comparison. He did not have a scientific degree or a degree in Euclidean geometry. When asked if his use of Euclidean geometry had been subjected to scientific and peer view, he stated, "Sometimes, but not in this case because it's a theorem to understand my logic. I'm not drawing lines. . . . [¶] . . . [¶] I'm using a theory. . . . I'm defending my logic with a theory in geometry[.]" On an as-needed basis, [name omitted] used a member of his staff for peer review. The prosecutor inquired if [name omitted] was aware of anyone using Euclidean geometry in the forensic analysis of photographs like him, and he replied, "By name? No."

"[name omitted] was asked if he had made any effort "to distinguish between artifacts and properties of the individuals depicted" in Exhibit 8. He replied, "No. Not in the report." He was then asked if he tried to make a distinction in his analysis. He said, "As best as . . . one could possibly do, but there's quite a bit of pixilation on that image."

In California, there's a lot of precedent on the admission of expert testimony. This case cites Sargon v USC (link) in addressing the issue of the appropriateness of the Trial Court's exclusion of the analyst's testimony and work.

"[In California,] the gatekeeper's role `is to make certain that an expert, whether basing testimony upon professional studies or personal experience, employs in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.' [Citation.]" (Sargon, supra, 55 Cal.4th at p. 772.)"

"Based on the foregoing, we conclude [name omitted]'s comparisons were properly excluded. His method was full of theories and assumptions, and he ignored some or all of the artifacts at different points. Simply put, his opinion was not based on matter of a type on which an expert may reasonably rely. Beyond that, because [name omitted] essentially confused artifacts for features, his opinion was speculative."

The ruling goes on to note relevant cases to support the conclusion that the exclusion was appropriate. As such, it's a good reference.

But to the point, why make up a new technique when there's plenty of guidance out there regarding photographic comparison / facial comparison? For Morphological Analysis, you can easily translate ASTM's guide into a spreadsheet that can be used to document features and locations. Yes, all of the current methods necessarily result from subjective observations. Likewise, conclusions are based upon those observations and, as such, should be adequately supported.
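
As a minimal sketch of what I mean (the feature names below are illustrative placeholders, not the actual contents of ASTM's 19 tables), you could generate a blank worksheet like this:

```python
import csv

# Illustrative placeholder features -- consult ASTM E3149-18 for the
# actual feature list.
features = ["Face shape", "Hairline", "Eyebrows", "Eyes", "Nose", "Mouth"]

with open("morphological_worksheet.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Feature", "Questioned image", "Known image", "Notes"])
    for feature in features:
        writer.writerow([feature, "", "", ""])
```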

If you're involved in a case where photographic comparison / facial comparison is at issue, feel free to contact me regarding a review of the evidence and/or the work done previously, or by opposing counsel's analyst. It's important to note that giving one's work to "a member of [one's] staff for peer review" is not actually a "peer review," it's a technical review. If you'd like an actual peer review, contact me today.

Likewise, if you want to learn this amazing discipline, we regularly feature classes on photographic comparison in Henderson, NV. We can also bring the class to you.

Have a great day, my friends.

Thursday, January 23, 2020

What is Super Resolution?

Back in early 2017, I wrote an article for Axon about Super Resolution, in support of their now-dissolved partnership with my former employer. It seems that Super Resolution is back in the news. By way of updating that post, let's revisit just what's going on with the technology and a few problems it may cause if you don't understand what's happening.

Vendor reps note that Super Resolution works at the "sub-pixel" level, and people's eyes roll. If the pixel is the smallest unit of measure, a single picture element, how can there be a "sub-pixel?" That's a very good question. Let's take a look at the answer.


From the report in Amped SRL's FIVE: The Super Resolution filter applies a sub-pixel registration to all the frames of a video, then merges the motion corrected frames together, along with a deblurring filtering. If a Selection is set, then the selected area will be optimized.

Ok. What is sub-pixel registration?

First, let's look at how the authors of Super-Resolution Without Explicit Subpixel Motion Estimation set up the premise: "The coefficients of this series are estimated by solving a local weighted least-squares problem, where the weights are a function of the 3-D space-time orientation in the neighborhood. As this framework is fundamentally based upon the comparison of neighboring pixels in both space and time, it implicitly contains information about the local motion of the pixels across time, therefore rendering unnecessary an explicit computation of motions of modest size. The proposed approach not only significantly widens the applicability of super-resolution methods to a broad variety of video sequences containing complex motions, but also yields improved overall performance." That's quite a mouthful.

Here's the breakdown.

The first thing we must understand is the pixel neighborhood. The neighborhood of a pixel is the collection of pixels that surround it. In 8-connectivity, every pixel that touches one of a given pixel's edges or corners is a neighbor of that pixel.
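
For the hands-on minded, here's a minimal sketch: the neighborhood of a pixel is just the 3x3 patch centered on it.

```python
import numpy as np

def neighborhood(img: np.ndarray, r: int, c: int) -> np.ndarray:
    """Return the 3x3 patch centered on pixel (r, c), clipped at the image border."""
    return img[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]

img = np.arange(25).reshape(5, 5)
print(neighborhood(img, 2, 2))  # the center pixel and its 8 neighbors
```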


Next, we must understand what registration means. Image registration is the process of aligning two or more images of the same scene. This process involves designating one image as the reference (also called the reference image or the fixed image), and applying geometric transformations to the other images so that they align with the reference.
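
Here's a minimal sketch of that idea using OpenCV's ECC algorithm (my illustration, not the method FIVE uses internally; file names are placeholders). ECC estimates, to sub-pixel precision, the warp that best aligns a moving image to the reference:

```python
import cv2
import numpy as np

# The fixed (reference) frame and the frame to be aligned to it.
ref = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
moving = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

warp = np.eye(2, 3, dtype=np.float32)  # initial affine warp: identity
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)

# Estimate the affine warp that maximizes the correlation between the
# two frames, refined to sub-pixel precision.
_, warp = cv2.findTransformECC(ref, moving, warp, cv2.MOTION_AFFINE,
                               criteria, None, 5)

# Apply the estimated warp so "moving" lines up with "ref".
aligned = cv2.warpAffine(moving, warp, (ref.shape[1], ref.shape[0]),
                         flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
```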

Let's put it together. A static pixel (P) in a single image is easy to understand. But, what about video? That pixel represents some place in 3-D space-time (two spatial dimensions plus time), and its space-time orientation will change as time progresses. We want to line up (register) that pixel across the multiple frames. Super Resolution thus tracks implicit information about the motion of the pixel across 3-D space-time, and corrects for that motion. The result of the process is a single higher-resolution image.

The practical implications are these:

  • Frame Averaging works well when the object of interest doesn't move. The frames are averaged and the things that are different across frames are removed and the things that are the same remain.
  • To help with a Frame Averaging exercise, we can use a perspective registration process to align the item of interest - a license plate for example - across frames. This works well when the item has moved to an entirely new location, like in low frame rate video.
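
Here's a minimal sketch of the first approach, plain frame averaging (my illustration, not Amped's implementation; the file name is a placeholder):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("evidence.avi")  # placeholder file name
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame.astype(np.float64))
cap.release()

# Pixel-wise mean across all frames: stable content reinforces itself,
# while frame-to-frame noise averages away.
average = np.mean(frames, axis=0)
cv2.imwrite("averaged.png", np.clip(average, 0, 255).astype(np.uint8))
```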

But, when the motion is subtle, super resolution is a better choice.

Here's an example. The park service was investigating a vandalism and poaching incident. There's a video that they believe was taken in the area of the incident. Within the video, there's a sign in the background that contains location information (text) that's blurred by the motion of the shaking, hand-held camera. There's enough motion to eliminate Frame Averaging as a processing choice. There's not enough motion to use a perspective registration function to align the sign correctly. Super resolution is the best choice to correct for the motion blur and register the pixels that make up the text of the sign.

In this case, super resolution was indeed the best choice. The sign's information was revealed and the location was determined.

And now the potential pitfalls ...

  • Brand new pixels and pixel neighborhoods are created in this process.
  • A brand new piece of evidence (demonstrative) is created in this process.

Whenever you perform a perspective registration, your geometric transform necessarily creates new pixels and neighborhoods. In FIVE, during the process of using the filters, the creation is "virtual" in that it all happens in the CPU and RAM. These new pixels and neighborhoods are really only created when you write the results of your processing out as a new file.

That brand new piece of evidence - the results written out - is a demonstrative that you've just created. You must explain its relationship to the actual evidence files and how it came to be. Indeed, you've just added a new file to the case. This fact should be disclosed in your report.

With the reports in FIVE, there is a plain English statement about the process, lifted from the many academic papers from which Amped SRL gets their filters. Sure, when you're asked about the process you performed, you can likely just read the report's description. But, what if the Trier of Fact wants to know more? How confident are you that you can explain super resolution?

Consider super resolution's main use - license plate enhancement. Your derivative file is a demonstrative in support of one side's theory of the case. Your derivative is illustrative of your opinion. Did you use the tool correctly? Are the results accurate? Is seeing believing? Given the ultra low resolutions we're usually dealing with, a slight shift in pixels can make a big difference in rendering alpha-numeric characters. This is part of the reason Amped SRL likes to use European license plates in their classes and PR - they're easy to fix. Not so in the US.

Advice like that shown above is the value of independence. A manufacturer's rep can really only show you the features. I'll show you not only how a tool works, but how to use it in different contexts, why it's sometimes inappropriate to use, and how to frame its use during testimony. If you're interested in diving deep into the discipline of video forensics, I invite you to an upcoming course. See our offerings on our web site.

Have a great day, my friends.

Tuesday, January 7, 2020

The FTC vs Axon? Axon vs the FTC? Wow!

Whilst we were all minding our own business, it seems that the US Federal Trade Commission was busy investigating Axon for anti-competitive behavior. Last Friday, Axon CEO, Rick Smith, penned a piece on LinkedIn to make his case to the public. According to Smith, the FTC believes that Axon's acquisition of VieVu in 2018 was anti-competitive.

I'm not a fly on the wall. I only know what I've read in Smith's post and the subsequent reporting and interviews. To be fair, the press has let Smith tell Axon's side of the story. For the government's side, we have only a press release on the FTC's web site.

In terms of disclosure, it should be noted that within the scope of my prior employment with Amped Software, Inc. (an Axon partner), I worked closely with several internal business units within Axon.

But, I want to break down the FTC's press release to attempt to determine what the real problem is here.

Paragraph 1: "The Federal Trade Commission has issued an administrative complaint (a public version of which will be available and linked to this news release as soon as possible) challenging Axon Enterprise, Inc.’s consummated acquisition of its body-worn camera systems competitor VieVu, LLC. Before the acquisition, the two companies competed to provide body-worn camera systems to large, metropolitan police departments across the United States."

Analysis: Yes, they were in the same market. But, given the many quality issues with VieVu's product line, they weren't really competing - in the same way that the best Texas high school football team is really no competition for the worst of the NFL in any given year. In my opinion, VieVu got "competitive" on deals by competing on price, not on quality. It was their low price that got them in the door at police departments, but it was their lack of quality that ruined the company.

Paragraph 2: "According to the complaint, Axon’s May 2018 acquisition reduced competition in an already concentrated market. Before their merger, Axon and VieVu competed to sell body-worn camera systems that were particularly well suited for large metropolitan police departments. Competition between Axon and VieVu resulted in substantially lower prices for large metropolitan police departments, the complaint states. Axon and VieVu also competed vigorously on non-price aspects of body-worn camera systems. By eliminating direct and substantial competition in price and innovation between dominant supplier Axon and its closest competitor, VieVu, to serve large metropolitan police departments, the merger removed VieVu as a bidder for new contracts and allowed Axon to impose substantial price increases, according to the complaint."

Analysis: Given the analysis of the first paragraph, VieVu was certainly not "particularly well suited" to deliver on any department's needs. Additionally, it wasn't the "competition" that drove prices down; it was VieVu essentially offering their goods below cost to get in the door. Selling below cost isn't sustainable, and police agencies must look at all factors of a vendor - like the fact that unsustainable business practices likely mean that the company won't be around throughout the lifecycle of the product.

Paragraph 3: “Competition not only keeps prices down, but it drives innovation that makes products better,” said Ian Conner, Director of the FTC’s Bureau of Competition. “Here, the stakes could not be higher. The Commission is taking action to ensure that police officers have access to the cutting-edge products they need to do their job, and police departments benefit from the lower prices and innovative products that competition had provided before the acquisition.”

Analysis: the market is still chock-full of offerings. There's Motorola/Watchguard, Panasonic, Getac, Utility, Coban, Visual Labs/Samsung, L3/Mobile Vision, and Digital Ally, plus over 10k results from China on alibaba.com. You can get a body camera from China's LS Vision for under $100/unit. That's a lot of competition.

Paragraph 4: "The complaint also states that as part of the merger agreement, Axon entered into several long-term ancillary agreements with VieVu’s former parent company, Safariland, that also substantially lessened actual and potential competition. These agreements barred Safariland from competing with Axon now and in the future on all of Axon’s products, limited solicitation of customers and employees by either company, and stifled potential innovation or expansion by Safariland. These restraints, some of which were intended to last more than a decade, are not reasonably limited to protect a legitimate business interest, according to the complaint."

Analysis: This part is just silly. Axon says to Safariland, known for their holsters and gear, stay with what you're good at (holsters and gear) and we'll stay with what we're good at. Stay out of our lane, and we'll stay out of yours. This is a good business decision, not anti-competitive behavior. You also have to be myopic to not consider that Safariland only bought VieVu in 2015. According to the WSJ, "Vievu LLC, a maker of police body cameras, has been acquired by Safariland LLC, which is bulking up its portfolio of security products ahead of a planned initial public offering next year." Safariland's entry into other vertical markets followed a similar pattern. But, at their heart, they're a holster and gear company, so their exit from the technology sector is no great loss.

Paragraph 5: "The Commission vote to issue the administrative complaint was 5-0. The administrative trial is scheduled to begin on May 19, 2020."

Analysis: What is missing is a specific citation as to which federal laws were violated. Likely, there was no specific violation of US law, but rather a violation of an FTC Rule. The FTC has the authority to pass and enforce its own rules outside of the normal US lawmaking process. Smith outlines the administrative hearing process in his op-ed. Smith is correct, this won't see a "court room," as the vast majority of FTC processes are kept in-house.

An examination of the FTC's "Competition Enforcement Database" found only 25 competition enforcement actions for 2018, down from 2017's 32 actions. Given the totality of commercial activity in the US, this is an incredibly small number. The assumption is that they only go after the most egregious of behaviors. If that's the case, what's really behind this action against Axon? VieVu was delivering faulty products. It was losing deals on its own. Axon did a mutually beneficial deal with Safariland to take VieVu off their hands. What's actually wrong with this? Does this rise to a Standard Oil or AT&T level? Hardly. So why this case? That's the problem with administrative processes: we'll never know. There's a complete lack of transparency into their decision making or procedures.

I do tend to agree with Smith that this issue rises above brands and technology. It's a peek into the workings of the Administrative State in the US. What remains to be seen is if the US government grants Axon permission to sue the FTC. Stay tuned.