Tuesday, April 9, 2019

UX Talk: Amped FIVE's Change Frame Rate filter

The Change Frame Rate filter was introduced into FIVE in 2014. Its functionality is quite simple: you can adjust a file's frame rate, targeting either a specific frame rate or a specific total time.

For today's post, I have some old files that I use in my Advanced Processing Techniques class. The old DVR that is the source of the class's files only had enough processing power to generate a file that covered 1 minute per camera. Each camera view's recordings were stored in separate 1-minute chunks of data, in separate folders, in a crazy folder scheme. Using Concatenate would be a nightmare, as the video files are all named the same.

When creating a proxy file for these files, FFMPEG does not recognize the fps tag in the container. Thus, it defaults to 25fps - which is not correct. Notice the screen shot above. At 25fps, the total time is about 10 seconds. Yet, the file captured 1 minute of time. Obviously, the frame rate is wrong. This type of use case is exactly why I like having this functionality in my toolset. I can manually enter the Total Time value and FIVE will calculate the resulting Frame Rate (ish).
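The arithmetic behind the correction is worth making explicit. Here's a minimal sketch in Python; the frame count of 250 is a hypothetical value chosen to match the ~10 second misread described above - substitute your file's actual count.

```python
# Sketch of the math behind a Total Time correction.
# The frame count (250) is hypothetical; use your file's actual count.

def true_frame_rate(frame_count: int, actual_seconds: float) -> float:
    """Frames divided by the real elapsed time gives the true rate."""
    return frame_count / actual_seconds

frames = 250            # frames found in the file (hypothetical)
assumed_fps = 25.0      # FFMPEG's fallback when no fps tag is read

print(frames / assumed_fps)            # 10.0 -> the misleading ~10 s duration
print(true_frame_rate(frames, 60.0))   # ~4.1667 fps for a true 1-minute clip
```

Entering the known Total Time simply lets the tool back-solve the frame rate from the frame count, which is why the result is only as good as the frame count and the declared time.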

The (ish) - the caveat - using this filter is not without its problems and pains. Being divorced from the Amped team means I can show you the problems and work-arounds without getting in trouble with the boss.

For this project, you might be tempted to load each 1-minute file, use the new Timeline filter, then change the frame rate as the final step (after using Timeline). The problem here, and the reason I feature this in training, is the high variability in the number of frames within each of the one-minute-long data files. Thus, I'd want to utilize the Change Frame Rate filter in each minute's processing chain in order to accurately represent time. It's a slog, but it's doable.

If you're a savvy user, you might be thinking, "Hey Jimmy. Just copy the filter instance from one chain and paste it into the other chains." You might think that. You'd be wrong. It doesn't work that way. If you copy / paste the filter, it retains the settings from the source and doesn't update itself with the destination chain's values. You'll want to change the values, but it still won't update. So, it's best to just use a new instance of the filter for each chain.

Another issue with the filter (I'm hoping that all of this gets fixed, in which case I'll post an update to this post) deals with the way in which users enter values in the Total Time field. In the screen shot below, you'll notice that the Frame Rate is 3.7415. I got this incorrect value because of the way in which I edited Total Time. I selected only the "tens" and changed it to zero, the "ones" value was already zero so I left it alone. Doing so caused an incorrect result. To get a correct result, you'll need to select the entirety of "seconds" (or minutes / hours if applicable) and make the change. Yes, I know, this is a lot to keep track of.

If you've restored the frame rate in each processing chain separately, combining the processing chains with the Timeline filter will be necessary. You might think that doing so will result in a file where the Total Time for each chain will add up to a nice round number in the Timeline filter's results. Again, you would be wrong.

For my project, I have 15 files in 15 folders. Each file represents 1 minute. Each file gets its own processing chain. Each chain is adjusted to 1 minute of total time. Each chain's frame rate is somewhere around 4fps (see above). But, in combining the chains with the Timeline filter, the program seems to favor the frame rates. The resulting duration, after using the Timeline filter, is not 15 minutes. It's closer to 14 minutes. Timeline seems to do some simple math in spite of all the work that I did in each processing chain - accounting for total frames and frame rate - ignoring the declared Total Time per chain.
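Amped hasn't documented the rule Timeline uses, but the behavior is consistent with summing the frames and playing them back at a single rate, rather than summing each chain's declared Total Time. A sketch with hypothetical frame counts (the real counts vary per recording):

```python
# Hypothetical frame counts for 15 one-minute files; each chain's
# declared Total Time is 60 seconds, so the declared sum is 15 minutes.
frame_counts = [252, 239, 247, 231, 244, 250, 236, 248, 241, 255,
                233, 246, 238, 251, 242]

declared_total = 60.0 * len(frame_counts)     # 900 s = 15 minutes

# If Timeline instead sums the frames and plays them at one fixed rate
# (here, the first chain's declared rate), the duration drifts:
single_fps = frame_counts[0] / 60.0           # 4.2 fps
timeline_total = sum(frame_counts) / single_fps

print(declared_total)    # 900.0
print(timeline_total)    # ~870 s -- short of the declared 15 minutes
```

The point isn't the exact numbers - they're invented - but that any single-rate playback of variable-count chains can't also honor each chain's declared Total Time.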

These are the types of questions I ask when validating a tool. They're also the types of things that I check when validating a new build / release of a known tool. I don't assume anything. I test everything. In testing - I found this issue.

Does one frame per second make a difference? Maybe not. But, in my project, the uncorrected difference is 1 minute in 15 - or 4 minutes per hour. That's significant. Given that other filters, like Add Timestamp, rely upon correct timing, yes - it will make a difference.

So, do you process everything and change the frame rate on the combined results?

Or, do you change the frame rate in each processing chain, then correct the results after combining the chains utilizing the Timeline filter? You'll get a different frame rate for each approach - thus the "ish." Which is correct? Your validation of your individual case will tell you. Because of the variability, my results can only advise you of the effect; they can't inform your results or your experiments beyond that.

Monday, April 8, 2019

UX Talk: Amped FIVE's Timeline Filter

A few months ago, Amped released Build 12727. That build featured some new functionality - the Timeline Filter.

What the Timeline Filter tries to do is to turn FIVE into an NLE by allowing you to combine processing chains. You can process segments individually, then combine them at the end of your work before writing out the finished results.

Timeline works a bit like Concatenate. The difference is that Concatenate works as a part of the "conversion" process, while Timeline works as part of the processing tasks. With Concatenate, you get what you get. With Timeline, you can fine-tune the results by selecting specific processing chains as well as the processing steps within those chains.

Timeline is part of the Link filter group. Like the other filters in that group, it doesn't support audio. If you're wondering where your audio track is after using the Timeline, Multiview, or Video Mixer filters, you're likely in North America. It seems, from observing the development of FIVE from here (the western US) that no one else in the world has to deal with audio in multimedia files.

The Filter Settings for the Timeline Filter are pretty simple. One thing to note: when you choose it, the settings will include all your project's processing chains. You'll want to sort through the Inputs and remove any chains that don't belong (click on the Input to remove, then click on the Remove button).

Once combined, write the resulting file out to your desired codec / container.

If your files do have an audio track, and you want the results to also have the audio track, you'll need to strip out the audio first (part of the Convert DVR functionality). Once you've performed your processing / analysis tasks, you'll want to re-combine the audio and video segments in an NLE like Vegas or Premiere Pro.
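If an NLE isn't handy, the same split and re-mux can be sketched at the command line with FFmpeg. The file names below are placeholders; -an and -vn are FFmpeg's standard drop-audio and drop-video flags, and -c copy avoids re-encoding:

```python
# Hedged sketch of a split / re-mux workflow driving FFmpeg's CLI.
# File names are placeholders; run each list with subprocess.run(cmd,
# check=True) when ffmpeg is on your PATH.

def split_commands(src: str) -> list[list[str]]:
    return [
        # video only: drop audio (-an), copy the video stream untouched
        ["ffmpeg", "-i", src, "-an", "-c:v", "copy", "video_only.mp4"],
        # audio only: drop video (-vn), copy the audio stream untouched
        ["ffmpeg", "-i", src, "-vn", "-c:a", "copy", "audio_only.m4a"],
    ]

def remux_command(video: str, audio: str, out: str) -> list[str]:
    # after processing, marry the processed video back to the original audio
    return ["ffmpeg", "-i", video, "-i", audio, "-c", "copy", out]
```

Stream-copying keeps the original audio bit-exact, which matters when the audio itself is evidence.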

Friday, April 5, 2019

Hang on to those installers

Early on in my career, I learned to hang on to the installer files for the programs that I encountered. At the time of the retrieval, I thought (wrongly) that I had the correct installer file already and didn't need to hang on to the file that came out of the evidence DVR. That decision came back to haunt me later when doing the analysis as the DVR's owner replaced the DVR before we could go back to retrieve the program's installer from the machine. This was a long time ago, but the lesson stuck. Assume nothing. Keep everything together.

Fast forward a decade and a half.

Amped SRL made a big deal about their incorporation of the Genetec SDK in FIVE, and FIVE's support of the .g64 and .g64x file types in 2017 (Build 10039). Indeed it was a big deal. Genetec is a popular company and agencies around the western world face these file types quite often. Genetec makes a player for their files, but to export individual camera views from their player takes a lot of time to accomplish - especially in major cases with many relevant camera views.

Fast forward to yesterday.

As you may have heard, I'm no longer part of the Amped team. Amped support is now handled through their support web site. There's no more phone support for North American customers (announced via a change to their EULA that customers likely accepted without reading). But, with an unpronounceable Scandinavian last name, I'm easy to find. I've been getting many calls per day about various analysis-related and software questions. Although I can't support the software directly, I can still answer UX (how-to) type questions. I also still conduct training sessions around Amped's software (click here for the calendar of offerings).

Yesterday, an old friend called me in a panic. He had a major case, the files were .g64x, and FIVE wasn't working with them anymore. Yes, he knew that I'm no longer with Amped Software. But, he was running out of options and time.

What he didn't know, what most of Amped's customers don't know, is that in March of 2018, Amped SRL quietly removed the Genetec functionality from its product line (Build 10717). Its normally verbose change log notes only that this build was an interim update - nothing about what was changed. The functionality was not only removed from 10717, but also 10039. Any mention of the Genetec functionality was stricken from the logs. If you search for Genetec on the Amped blog, you will not find anything. It's as if the Genetec integration never happened.

A few weeks ago, another old customer called with a similar story. In his case, he had performed the original work with the .g64 files in Build 10039. The case is moving to trial and he's updated FIVE faithfully. He's been asked to do some additional work on the case. But, the project file doesn't work. Usually, it's quick to duplicate your work and recreate a corrupted project file. But in this case, he can't. He no longer has the ability to work with these files in the same way. It's caused a huge issue with his case - a huge issue he didn't know was coming.

That brings me back to that initial lesson in dealing with the first generation of DVRs - save everything together. Yes, Amped's support portal gives users under support contracts the ability to download previous versions. But, as several have discovered, Amped has changed the functionality of at least one of those installer builds (preserving the old build number while rolling out the updated version doesn't help). What else has changed in what's hosted there? How would you know?

All hope was not lost. An older computer of mine was still configured for support of Axon's sales team, and I still had the original Axon FIVE Build 10039 on that laptop from a demo done quite some time ago. I was able to help the users solve their problems with the Genetec files. I was happy to do so. If you need some help as well, let me know.

Thursday, April 4, 2019

The "NAS Report," 10 years later

It's been a bit over 10 years since Strengthening Forensic Science in the United States: A Path Forward was released (link). What's changed since then?

As a practitioner and an educator, Chapter 8 was particularly significant. Here's what we should all know - here's what we're all responsible for knowing.

"Forensic Examiners must understand the principles, practices, and contexts of science, including the scientific method. Training should move away from the reliance on the apprentice-like transmittal of practices to education ..."

10 years later, has the situation changed? 10 years later, there are a ton of "apprentice-like" certification programs and just a handful of college programs. College programs are expensive and time consuming. Mid-career professionals don't have the time to sit in college classes for years. Mid-career professionals don't have the money to pay for college. Those that have retired from public service and "re-entered" their profession on the private side face the same challenges.

Years ago, I designed a curriculum set for an undergraduate education in the forensic sciences. It was designed such that it could form the foundation for graduate work in any of the forensic sciences - with electives being discipline specific. I made the rounds of schools. I made the rounds of organizations like LEVA. I made my pitch. I got nowhere with it. Colleges are non-profit in name only, it seems.

To address the problems of time and money, as well as proximity, I've moved the box of classes on-line. The first offering is out now - Statistics for Forensic Analysts. It's micro learning. It's "consumer level," not "egg head" level. It's paced comfortably. It's priced reasonably. It's entirely relevant to the issues raised in the NAS Report, as well as the issues raised in this month's edition of Significance Magazine.

I encourage you to take a look at Statistics For Forensic Analysts. If you've read the latest issue of Significance, and you have no idea what they're talking about or why what they're talking about is vitally important, you need to take our stats class. Check out the syllabus and sign-up today.

Wednesday, April 3, 2019

Why do you need science?

An interesting morning's mail. Two articles released overnight deal with forensic video analysis. Two different angles on the subject.

First off, there's the "advertorial" for the LEVA / IAI certification programs in the Police Chief Magazine.

The pitch for certification was complicated by this image:

The caption for the image further complicated the message for me: "Proper training is required to accurately recover or enhance low-resolution video and images, as well as other visual complexities."

Whilst the statement is true, do you really believe that the improvements to the image happened from the left to the right? Perhaps, for editorial purposes, the image was degraded, from the original (R) to the result (L). If I'm wrong about this, I'd love to see the case notes and the specifics as to the original file. Can you imagine such a result coming from the average CCTV file? Hardly.

Next in the bin was an opinion piece by the Washington Post's Radley Balko - "Journalists need to stop enabling junk forensics." It's seemingly the rebuttal to the LEVA / IAI piece.

Balko picks up where the ProPublica series left off - an examination of the discipline in general, and Dr. Vorder Brugge of the FBI in particular. It's an opinion piece, and it's rather pointed in its opinion of the state of the discipline. Balko, like ProPublica, has been on this for a while now (here's another Balko piece on the state of forensic science in the US).

I don't disagree with any of the referenced authors here. Not one bit. Jan and Kim are correct in that the Trier of Fact needs competent analysts working cases. Balko is correct in that the US still rather sucks at science. That we suck at science was the main reason the Obama administration created the OSAC and the reason Texas created its licensing scheme for analysts.

Where I think I disagree with Jan and Kim is essentially a legacy of the Daubert decision. Daubert seemingly outsourced the qualification process to third parties. It gave rise to the certification mills and to industry training programs. Training to competency means different things to different organizations. For example, I've been trained to competency on the use of Amped's FIVE and Authenticate. But, none of that training included the underlying science behind how the tool is used in case work. For that, I had to go elsewhere. But, Amped Software, Inc, certified me as a user and a trainer of the tools. That (certification) was just a step in the journey to competency, not the destination.

Balko, like ProPublica, notes the problems with pattern evidence exams. Their points are valid. But, it doesn't mean that image comparison can't be accomplished. It does mean that image comparisons should be founded in science. One of those sciences is certainly image science (knowing the constituent parts of the image / video and how the evidence item was created, transmitted, stored, retrieved, etc.). But another one of the sciences necessary is statistics (and experimental design).

As I noted in my letter to the editor of the Journal of Forensic Identification, experimental design and statistics form a vital part of any analysis. For pattern matching, the evidence item may match the CCTV footage. But, would a representative sample of similar items (shirts, jeans, etc) also match? Can you calculate probabilities if you're unaware of the denominator in the function (what's the population of items)? Did you calculate the sample size properly for the given test? Do you have access to a sample set? If not, did you note these limitations in your report? Did these limitations inform your conclusions?
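To make the sample-size question concrete, here's the textbook formula for estimating a proportion, sketched in Python. The defaults (95% confidence, a ±5% margin of error, worst-case p = 0.5) are illustrative, not case-specific:

```python
import math

def sample_size(p: float = 0.5, margin: float = 0.05, z: float = 1.96) -> int:
    """Minimum n to estimate a proportion p within +/- margin at ~95%
    confidence (z = 1.96). p = 0.5 is the conservative worst case."""
    return math.ceil((z ** 2) * p * (1 - p) / margin ** 2)

print(sample_size())             # 385 -- the familiar "about 400" figure
print(sample_size(margin=0.03))  # 1068 -- tighter margins get expensive fast
```

If you can't articulate where your n came from - or can't obtain a sample set at all - that's a limitation that belongs in the report.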

Both LEVA and the IAI have a requirement for their certified analysts to seek and complete additional training / education towards eventual recertification. This is a good thing. But, as many of us know, there are only so many training opportunities. "At some point, you kind of run out of classes to take" is a common refrain. Whilst this may be true for "training" (tool / discipline specific), this is so not true for education. There are a ton of classes out there to inform one's work. The problem there becomes price / availability. This price / availability problem was the primary driver behind my taking my Statistics class out of the college context and putting it on-line as micro learning. My other classes from my "curriculum in a box" concept will roll out later this year and into the next year.

So to the point of the morning's articles - yes, you do need a trained / educated analyst. Yes, that analyst needs to engage in a scientific experiment - governed both by image science as well as experimental science. Forensic science can be science, if it's conducted scientifically. Otherwise, it becomes a rhetorical exercise utilizing demonstratives to support its unreproducible claims.