Tuesday, December 18, 2018

Sample Sizes and Speed Calculations, oh my!

There's been a lot of talk lately about using the footage from a DVR to determine the speed of an object depicted on the video. In my classes on the topic, I explain how to set up the experiment to validate the results of your tests. In this post, I want to present a few snapshots of the steps in the validation.

It's been well documented that DVRs are not Swiss chronographs; they're mostly cheap boxes of random parts. The on-screen time stamps have been shown to be "estimates" and "approximations" of time - not entirely reliable. It's also well documented that the files' metadata contains hints about time. Let's take a look at a few scenarios.

Test 1: Geovision 


The question in this case was: is this particular evidence item reliable in its generation of frames, such that the frame rate metadata could be used for a speed calculation?

A sample size calculation was performed to see how many tests would need to be performed to build a model of the DVR's performance. In this way, we'd know if the evidence item was typical of the performance of the DVR or a one-time error.


Analysis: A priori: Compute required sample size 
Input: Tail(s)                  = One
Proportion p2            = 0.8
α err prob               = 0.05
Power (1-β err prob)     = 0.99
Proportion p1            = 0.5
Output: Lower critical N         = 24.0000000
Upper critical N         = 24.0000000
Total sample size        = 37
Actual power             = 0.9907227
Actual α                 = 0.0494359

The calculation determined that a total of 37 tests (sample size = 37) would be needed to reach the desired power (0.99) for our generic binomial test (works correctly / doesn't work correctly). At the other end of the graph, with a sample size of less than 10, the test's power is so low that a coin flip would have been more accurate.
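
If you don't have G*Power handy, the same a priori calculation can be reproduced with a short script. Here's a minimal sketch of the exact binomial logic using scipy (the function name and search range are mine, not G*Power's):

from scipy.stats import binom

def binomial_sample_size(p0, p1, alpha=0.05, target_power=0.99):
    # Smallest n for a one-tailed exact binomial test of
    # H0: p = p0 against H1: p = p1 (> p0) at the target power.
    for n in range(2, 1000):
        # Smallest critical count c with P(X >= c | p0) <= alpha.
        c = int(binom.ppf(1 - alpha, n, p0)) + 1
        actual_alpha = binom.sf(c - 1, n, p0)
        power = binom.sf(c - 1, n, p1)
        if power >= target_power:
            return n, c, actual_alpha, power
    raise ValueError("no n found in search range")

n, c, a, pw = binomial_sample_size(0.5, 0.8)
print(n, c, round(a, 7), round(pw, 7))
# Should match the G*Power output above: 37, 24, 0.0494359, 0.9907227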

The Frame Analysis shown above, generated by Amped FIVE, seems to indicate that the I Frame generation is fairly regular and perhaps the P Frames are duplicates of the previous I Frame. You'd only get this information from an analysis of the metadata - plus a hash of each frame. Nothing viewed on-screen, watching the video play, would give you this specific information.
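
Outside of FIVE, the duplicate-frame check can be approximated with a few lines of Python and OpenCV. A minimal sketch, hashing the decoded pixels of each frame (the filename is hypothetical, and a decoded-pixel hash is my stand-in for a per-frame hash - not necessarily FIVE's method):

import hashlib

import cv2  # OpenCV: pip install opencv-python

def frame_hashes(path):
    # MD5 each decoded frame; identical digests flag duplicated frames.
    cap = cv2.VideoCapture(path)
    digests = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        digests.append(hashlib.md5(frame.tobytes()).hexdigest())
    cap.release()
    return digests

hashes = frame_hashes("evidence_ch01.avi")  # hypothetical filename
dupes = sum(1 for a, b in zip(hashes, hashes[1:]) if a == b)
print(len(hashes), "frames decoded,", dupes, "exact copies of the previous frame")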

It's important to note that this one video represents one channel in a 16-channel DVR. The DVR features motion / alarm activation of its recordings. It took a bit of digging to find out how many camera streams were active (motion / alarm) during the time the evidence item was being recorded.

With the information in hand, we created an experiment to reproduce the original recording conditions. 

But first, a couple of important definitions are needed.

Observational Data: data collected from an observational study, one in which the researcher observes subjects under circumstances over which he or she has no control.

Experimental Data: data collected from an experimental study, one in which the researcher controls the conditions under which the measurements are taken.

Our experiment in this case was "experimental." We were able to control which of the DVR's channels were actively recording - when and for how long.

With the 37 tests conducted and the data recorded, it was determined that the average recording rate within the DVR - for the channel that recorded the evidence item - was effectively seven seconds per frame. Essentially, the DVR was so overwhelmed with data that it could not process all of the incoming signal effectively. It did the best that it could, but in its "error state," the I Frames were copied to fill the data container. Some I Frames were even duplicates of previous I Frames. This was likely due to a rule about the fps needed to create the piece of media; the system's native container format was .avi.
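
For each of the 37 test runs, the seconds-per-frame figure falls out of a simple count of unique frames. A minimal sketch, assuming you logged the wall-clock duration of each test recording (the 300-second duration and the filename are hypothetical):

import hashlib

import cv2

def unique_frame_count(path):
    # Count frames whose pixels differ from the previous frame; padded
    # duplicates inflate the container, not the real recording rate.
    cap = cv2.VideoCapture(path)
    last, unique = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        digest = hashlib.md5(frame.tobytes()).hexdigest()
        if digest != last:
            unique += 1
            last = digest
    cap.release()
    return unique

# Hypothetical test run: a recording known (by wall clock) to span 300 seconds.
seconds_per_frame = 300 / unique_frame_count("run_01.avi")
print(round(seconds_per_frame, 1), "seconds per unique frame")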

Test 2: Dahua 

In a second test, a "generic" black box DVR was tested. The majority of its parts could be traced to Dahua (China). The 16-camera system outputs a native file with a .264 extension.

The "forensic applications," FIVE included, are all based on FFMPEG for the "conversion" of these types of files. After conversion, the report of the processing indicated that the fps of the output video was 25fps. BUT, this was recorded in the US. 

Is 25fps the correct rate?
Is 25fps an "error state?"
If 25fps isn't the "correct rate," what should it be?

In this case, the frame rate information in the original container was "non-standard." As such, FFMPEG had no way of identifying what "it should be" and thus defaulted to 25fps - the output container needs to know its playback rate. Why 25fps? FFMPEG has French origins, and in PAL regions the playback rate is 25fps.
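
Before trusting any tool's conversion, you can ask ffprobe (FFMPEG's companion inspector) what the stream claims about itself. A minimal sketch, with a hypothetical filename:

import json
import subprocess

def declared_rates(path):
    # Report what frame rate the stream *claims*; with non-standard DVR
    # containers this is where FFMPEG's 25fps default shows up.
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=r_frame_rate,avg_frame_rate",
         "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)["streams"][0]

print(declared_rates("evidence.264"))  # hypothetical filename
# e.g. {'r_frame_rate': '25/1', 'avg_frame_rate': '25/1'}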

In this case, we didn't have an on-screen timestamp to begin our work. Thus, we needed to conduct an experiment to attempt to calculate an effective frame rate for this particular DVR. This begins with a sample size calculation: how many tests do we need to build the model of the DVR's performance?


t tests - Linear multiple regression: Fixed model, single regression coefficient

Analysis: A priori: Compute required sample size 
Input: Tail(s)                       = One
Effect size f²                = 0.15
α err prob                    = 0.05
Power (1-β err prob)          = 0.99
Number of predictors          = 2
Output: Noncentrality parameter δ     = 4.0062451
Critical t                    = 1.6596374
Df                            = 104
Total sample size             = 107
Actual power                  = 0.9902320
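
As with Test 1, these numbers can be sanity-checked in code. A minimal sketch of the noncentral-t computation behind this G*Power output (the function name and search loop are mine):

from scipy.stats import nct, t

def regression_sample_size(f2, n_predictors, alpha=0.05, target_power=0.99):
    # Smallest N for a one-tailed t test of a single coefficient in a
    # fixed-model multiple regression, per G*Power's noncentral-t logic.
    for n in range(n_predictors + 2, 10000):
        df = n - n_predictors - 1      # residual degrees of freedom
        delta = (f2 * n) ** 0.5        # noncentrality parameter
        t_crit = t.ppf(1 - alpha, df)  # one-tailed critical t
        power = nct.sf(t_crit, df, delta)
        if power >= target_power:
            return n, df, delta, t_crit, power
    raise ValueError("no N found in search range")

n, df, delta, t_crit, power = regression_sample_size(0.15, 2)
print(n, df, round(delta, 7), round(t_crit, 7), round(power, 7))
# Should match the G*Power output above: 107, 104, 4.0062451, 1.6596374, 0.990232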

In order to control for all of the points of data entry into the system (16 channels) as well as the data choke points (chips and buses), our sample size has increased quite significantly. It's not a simple yes/no test as in Test 1 (above).

A multiple linear regression attempts to model the relationship between two or more explanatory variables and a response variable by fitting a linear equation to observed data. Essentially, how do the channels, buses, and chips (independent / control variables) influence the resulting data container (dependent variable)?
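
To illustrate the shape of that analysis only (the numbers below are synthetic stand-ins, not the case data), such a regression can be fit with statsmodels:

import numpy as np
import statsmodels.api as sm

# Synthetic stand-in data, NOT case data: per test run, the number of
# active channels and alarm events (controlled inputs) and the frames
# our channel actually wrote (the measured response).
rng = np.random.default_rng(42)
active_channels = rng.integers(1, 17, size=107)
alarm_events = rng.integers(0, 5, size=107)
frames_written = (40 - 1.8 * active_channels - 2.1 * alarm_events
                  + rng.normal(0, 2.0, size=107))

X = sm.add_constant(np.column_stack([active_channels, alarm_events]))
fit = sm.OLS(frames_written, X).fit()
print(fit.summary())  # coefficients estimate how system load depresses the write rate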

The tests were run and the data assembled. It was found that the effective frame rate of the channel that recorded our evidence was 1.3526fps. If you had just accepted the 25fps given to you by FFMPEG, the display of the video would have been inaccurate; 100 frames that actually span about 74 seconds of real time (100 ÷ 1.3526fps) would play back in just 4 seconds, making everything appear to move roughly 18 times faster than it did. Using the 25fps for the speed calculation would also yield inaccurate results. Having the effective frame rate, plus the validation of the results, gives everyone involved in the process reason to trust your results.

It certainly helps that my tool of choice in working with the video data, Amped FIVE, contains the tools necessary to analyse the data. I can hash each frame (hash is now part of the Inspector Tools in FIVE). I can analyse the metadata (see above). Plus, I can adjust the playback rate precisely (Change Frame Rate filter).


These examples illustrate the distinct difference between what one "knows" and what one can "prove." We can "know" what the data stream tells us via the various frame analysis tools that are available. We can "prove" the validity of our results by conducting the appropriate tests, utilizing the appropriate number of tests (sample size).

If you are interested in knowing more about this topic, or if you need help on a case, or if you'd like to learn how to do these tests yourself, you can find out more by clicking here.
