In the common law of torts, res ipsa loquitur (Latin for "the thing speaks for itself") is a doctrine that infers negligence from the very nature of an accident or injury in the absence of direct evidence on how any defendant behaved. Although modern formulations differ by jurisdiction, common law originally stated that the accident must satisfy the necessary elements of negligence: duty, breach of duty, causation, and injury. In res ipsa loquitur, the elements of duty of care, breach, and causation are inferred from an injury that does not ordinarily occur without negligence.
Res ipsa loquitur is often confused with prima facie ("at first sight"), the common law doctrine that a party must show some minimum amount of evidence before a trial is worthwhile.
(Source)
What on earth does this have to do with multimedia analysis? In so many cases, attorneys argue and judges agree that a video / image is what it is - that it speaks for itself - that an analyst is not needed to explain crucial elements of the evidence item.
Take this image:
The upper section is a depiction of the Chuvash State Opera and Ballet Theater in Cheboksary, Russia. This icon of Brutalist architecture is one of the great examples of the style in Russia, and a must-see for tourists.
The lower section layers in (Photoshops) elements from Star Wars.
Under res ipsa loquitur - the lower section speaking for itself - it could be argued that Imperial troops have occupied central Russia. This is, of course, a ridiculous idea. Here, we're using absurdity to illustrate the absurd.
In terms of modern video evidence - can the video indeed speak for itself? Is the object of interest an artifact of compression, a result of noise, or an element of the scene? How would you know? You would engage an analyst - like me.
Any case involving Forensic Photographic Comparison or Forensic Content Analysis must necessarily start with a "ground truth assessment": what are the elements of the area of interest? A macroblock analysis is often performed. Is the area of interest populated by losslessly encoded data, copied data, predicted data, or error? How would you know? You would know by using validated tools and your trained mind / eyes.
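As a rough, tool-agnostic illustration of one small part of that ground truth assessment, the sketch below uses FFmpeg's ffprobe (assumed to be installed and on the path) to tally picture types (I / P / B) in a video - a coarse first cut at separating intra-coded data from predicted data. It is not a macroblock-level view, and it's no substitute for a validated tool; treat it as a sketch only.

```python
import subprocess
from collections import Counter

def picture_type_census(video_path):
    """Tally picture types (I / P / B) in the first video stream.

    Intra-coded (I) pictures stand alone; predicted (P / B) pictures are
    built from other pictures - a coarse cue for where prediction, and
    thus prediction error, can live in a stream. Assumes ffprobe is
    installed and on the path.
    """
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "frame=pict_type", "-of", "csv=p=0", video_path],
        capture_output=True, text=True, check=True,
    )
    return Counter(result.stdout.split())

# Hypothetical evidence file - substitute your own exhibit.
print(picture_type_census("evidence.avi"))  # e.g. Counter({'P': 820, 'I': 35})
```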
In Authentication cases, this is no different. Fakes are getting more sophisticated and easier to produce. Analysts need valid tools, training, and education. This is why the training / educational sessions that I offer move beyond simple button pushing into a deeper understanding of the underlying technology, as well as how forgeries are created in the first place. Because "false positives" and "false negatives" can be frequent in authentication tools that are built around data sets, one must know how to interpret results as well as how to validate them across a wider variety of methods that do not only involve the software being used. This is why statistics goes hand in hand with the domains of analysis.
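To make the false positive / false negative point concrete, here's a small worked example of Bayes' rule with purely illustrative numbers (no real tool's rates are being quoted): when forgeries are rare in the case load, even a seemingly strong detector produces more false alarms than true hits.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(actually forged | tool flags it), via Bayes' rule."""
    true_alarms = sensitivity * prevalence
    false_alarms = (1 - specificity) * (1 - prevalence)
    return true_alarms / (true_alarms + false_alarms)

# Illustrative numbers only - not measurements of any real tool.
# A detector that catches 90% of forgeries, wrongly flags 10% of
# authentic images, applied where 5% of submissions are forged:
print(round(positive_predictive_value(0.90, 0.90, 0.05), 3))  # 0.321
```

In that hypothetical, only about a third of the images the tool flags are actually forged. That's the interpretation problem in a nutshell.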
If you're interested in training / education, you can sign up for our on-going / upcoming offerings on the web site. Seats are available. Also, with our on-line micro learning offerings, seats are always available and you learn at your own pace, in your own location.
Wednesday, April 24, 2019
UX Talk: Amped Authenticate - Batch File Format Comparison
One of the problems with products that generate results based on a comparison to an internal database is errors - false positives / false negatives. As much as I like Amped's Authenticate, it does suffer from this problem. The new reporting feature makes this worse, not better.
Here's the scenario.
You load an image and check the File Information, or you run the new "Smart Report." One of the results in this panel indicates a mismatch of the JPEG's Quantization Table - "Compression signature is incompatible with the actual camera make-model."
How do you know if this result comes from an actual mismatch? How do you know if this result is in fact a "non-match," meaning the evidence file comes from a camera whose JPEG QT information isn't actually in the product's database - in this case Amped Authenticate's?
Answer: run a batch File Format Comparison against a valid sample set of random images taken by the same model of camera.
I've written previously about the utility of the Camera Forensics service. This is one of those instances where the service comes in handy. Assemble your sample and drop a copy of your evidence image into the folder of samples. Then run the Comparison.
When the report is generated, scroll across to find the column for JPEG QT hash. If, in fact, your evidence image is a mismatch - "Compression signature is incompatible with the actual camera make-model" - then the evidence image's JPEG QT hash should not match the JPEG QT hash of any of the sample files. In my case, all of the samples matched each other, and matched the evidence file. The result from the software was false.
Were this your case, an actual case, you would want to thoroughly document this process of validating the results. You wouldn't want to leave the statement of "mismatch" untested. Yes, the software returned the result. However, we assembled a valid sample set to test the results. Here, we found the software in error (likely the software's internal database was incomplete). We found that the evidence file's JPEG QT matched the JPEG QT for all of the sample files.
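For those who want to replicate this kind of cross-check outside of any one product, here's a minimal sketch using Pillow, which exposes a JPEG's quantization tables via Image.quantization. The folder and file names are hypothetical; the logic simply fingerprints each file's tables and reports whether the evidence image matches the sample population.

```python
import hashlib
from pathlib import Path
from PIL import Image  # pip install Pillow

def qt_fingerprint(jpeg_path):
    """Fingerprint a JPEG's quantization tables for quick comparison."""
    with Image.open(jpeg_path) as im:
        tables = getattr(im, "quantization", None)
        if not tables:
            return None  # not a JPEG, or no tables exposed
        serialized = ";".join(
            ",".join(str(v) for v in tables[table_id])
            for table_id in sorted(tables)
        )
        return hashlib.sha256(serialized.encode()).hexdigest()[:16]

# Hypothetical layout: the evidence image dropped into a folder of
# samples taken with the same make / model of camera.
fingerprints = {p.name: qt_fingerprint(p)
                for p in sorted(Path("samples").glob("*.jpg"))}
evidence = fingerprints.get("evidence.jpg")
for name, fp in fingerprints.items():
    print(name, fp, "matches evidence" if fp == evidence else "differs")
```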
The pace of change in camera technology is quite brisk. The issue is further complicated by the fact that camera technology is a bit regional - meaning there are cameras sold in some areas but not others. Additionally, not all manufacturers make their QTs available to developers. Very few do, actually. Thus, is it realistic to expect to find all QTs present in a particular software's database? Of course not. Therefore, you'll need to remember to validate your results - especially if you get a "false" result.
Tuesday, April 9, 2019
UX Talk: Amped FIVE's Change Frame Rate filter
The Change Frame Rate filter was introduced into FIVE in 2014. Its functionality is quite simple. You can adjust a file's frame rate, targeting either a specific frame rate or a specific total time.
For today's post, I have some old files that I use in my Advanced Processing Techniques class. The old DVR that is the source of the class' files only had enough processing power to generate a file that covered 1 minute per camera. Each camera view's recordings were stored in separate 1-minute chunks of data, in separate folders, in a crazy folder scheme. Using Concatenate would be a nightmare as the video files are all named the same.
When creating a proxy file for these files, FFmpeg does not recognize the fps tag in the container. Thus, it defaults to 25fps - which is not correct. Notice the screen shot above. At 25fps, the total time is about 10 seconds. Yet, the file captured 1 minute of time. Obviously, the frame rate is wrong. This type of use case is exactly why I like having this functionality in my toolset. I can manually enter the Total Time value and FIVE will calculate the resulting Frame Rate (ish).
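Before getting to the caveats, the arithmetic itself is worth seeing, since it's what you'll be sanity-checking. A sketch with assumed numbers that mirror this example (roughly 250 frames in the file):

```python
# A mis-tagged file: the proxy assumes 25 fps, so ~250 frames play in ~10 s.
frames = 250             # assumed frame count, for illustration
assumed_fps = 25.0
print(frames / assumed_fps)      # 10.0 seconds - far too short

# The camera actually covered 60 s of real time, so the true rate is:
true_total_time = 60.0
print(frames / true_total_time)  # ~4.17 fps - the "somewhere around 4 fps"
```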
The (ish) - the caveat - using this filter is not without its problems and pains. Being divorced from the Amped team means I can show you the problems and work-arounds without getting in trouble with the boss.
For this project, you might be tempted to load each 1-minute file, use the new Timeline filter, then change the frame rate as the final step (after using Timeline). The problem here, and the reason I feature this in training, is the high variability in the number of frames within each of the one-minute-long data files. Thus, I'd want to utilize the Change Frame Rate filter in each minute's processing chain in order to accurately represent time. It's a slog, but it's doable.
If you're a savvy user, you might be thinking, "Hey Jimmy. Just copy the filter instance from one chain and paste it into the other chains." You might think that. You'd be wrong. It doesn't work that way. If you copy / paste the filter, it retains the settings from the source and doesn't update itself with the destination chain's values. You'll want to change the values, but it still won't update. So, it's best to just use a new instance of the filter for each chain.
Another issue with the filter (I'm hoping that all of this gets fixed, in which case I'll post an update to this post) deals with the way in which users enter values in the Total Time field. In the screen shot below, you'll notice that the Frame Rate is 3.7415. I got this incorrect value because of the way in which I edited Total Time. I selected only the "tens" and changed it to zero, the "ones" value was already zero so I left it alone. Doing so caused an incorrect result. To get a correct result, you'll need to select the entirety of "seconds" (or minutes / hours if applicable) and make the change. Yes, I know, this is a lot to keep track of.
If you've restored the frame rate in each processing chain separately, combining the processing chains with the Timeline filter will be necessary. You might think that doing so will result in a file where the Total Time for each chain will add up to a nice round number in the Timeline filter's results. Again, you would be wrong.
For my project, I have 15 files in 15 folders. Each file represents 1 minute. Each file gets its own processing chain. Each chain is adjusted to 1 minute of total time. Each chain's frame rate is somewhere around 4fps (see above). But, in combining the chains with the Timeline filter, the program seems to favor the frame rates. The resulting duration, after using the Timeline filter, is not 15 minutes. It's closer to 14 minutes. Timeline seems to do some simple math - accounting for total frames and frame rate - in spite of all the work that I did in each processing chain, ignoring the declared Total Time per chain.
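Here's a hedged numerical sketch of one plausible reading of that behaviour. The per-file frame counts below are invented for illustration; the point is that playing the combined frames at a single representative rate does not reproduce the declared 60 seconds per chain:

```python
# Invented frame counts for 15 one-minute files (real DVRs vary like this).
frame_counts = [251, 239, 247, 233, 244, 250, 228, 241, 236, 249,
                240, 232, 245, 238, 243]

declared_minutes = len(frame_counts)               # each chain declares 60 s
per_chain_fps = [n / 60.0 for n in frame_counts]   # ~4 fps each, but varying

# If the combined timeline plays every frame at one representative rate
# instead of honouring each chain's declared total time, duration drifts:
one_rate = per_chain_fps[0]                        # 251 / 60 = ~4.18 fps
combined_minutes = sum(frame_counts) / one_rate / 60.0
print(declared_minutes)             # 15
print(round(combined_minutes, 2))   # ~14.41 - the "closer to 14 minutes"
```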
These are the types of questions I ask when validating a tool. They're also the types of things that I check when validating a new build / release of a known tool. I don't assume anything. I test everything. In testing - I found this issue.
Does one frame per second make a difference? Maybe not. But, in my project, the uncorrected difference is 1 minute in 15 - or 4 minutes per hour. That's significant. Given that other filters, like Add Timestamp, rely upon correct timing, yes - it will make a difference.
So, do you process everything and change the frame rate on the combined results?
Or, do you change the frame rate in each processing chain, then correct the results after combining the chains utilizing the Timeline filter? You'll get a different frame rate for each - thus the "ish." Which is correct? Your validation of your individual case will tell you. Because of the variability, my results can only alert you to the effect; they can't stand in for your own tests and experiments.
Monday, April 8, 2019
UX Talk: Amped FIVE's Timeline Filter
A few months ago, Amped released Build 12727. That build featured some new functionality - the Timeline Filter.
What the Timeline Filter tries to do is to turn FIVE into an NLE by allowing you to combine processing chains. You can process segments individually, then combine them at the end of your work before writing out the finished results.
Timeline works a bit like Concatenate. The difference being that Concatenate works as a part of the "conversion" process and Timeline works as part of the processing tasks. With Concatenate, you get what you get. With Timeline, you can fine tune the results by selecting specific processing chains as well as the processing steps within those chains.
Timeline is part of the Link filter group. Like the other filters in that group, it doesn't support audio. If you're wondering where your audio track is after using the Timeline, Multiview, or Video Mixer filters, you're likely in North America. It seems, from observing the development of FIVE from here (the western US) that no one else in the world has to deal with audio in multimedia files.
The Filter Settings for the Timeline Filter are pretty simple. One thing to note: when you choose it, the settings will include all of your project's processing chains. You'll want to sort through the Inputs and Remove any chains that don't belong (click on the Input to remove, then click the Remove button).
Once combined, write the resulting file out to your desired codec / container.
If your files do have an audio track, and you want the results to also have the audio track, you'll need to strip out the audio first (part of the Convert DVR functionality). Once you've performed your processing / analysis tasks, you'll want to re-combine the audio and video segments in an NLE like Vegas or PremierePro.
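If you'd rather script the strip / re-combine steps than click through them, here's a minimal FFmpeg sketch (file names hypothetical; FFmpeg assumed installed). FIVE's Convert DVR handles the stripping internally - this is just the equivalent outside the tool:

```python
import subprocess

src = "camera1.avi"  # hypothetical source file with an audio track

# 1. Strip the audio so the video-only stream can go through processing.
subprocess.run(["ffmpeg", "-i", src, "-an", "-c:v", "copy",
                "video_only.avi"], check=True)

# 2. ...process video_only.avi in your analysis tool...

# 3. Re-marry the processed video with the original audio track.
#    NB: if processing changed the timing / frame rate, re-verify sync.
subprocess.run(["ffmpeg", "-i", "processed.avi", "-i", src,
                "-map", "0:v:0", "-map", "1:a:0", "-c", "copy",
                "output.avi"], check=True)
```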
Friday, April 5, 2019
Hang on to those installers
Early on in my career, I learned to hang on to the installer files for the programs that I encountered. At the time of the retrieval, I thought (wrongly) that I had the correct installer file already and didn't need to hang on to the file that came out of the evidence DVR. That decision came back to haunt me later when doing the analysis as the DVR's owner replaced the DVR before we could go back to retrieve the program's installer from the machine. This was a long time ago, but the lesson stuck. Assume nothing. Keep everything together.
Fast forward a decade and a half.
Amped SRL made a big deal about their incorporation of the Genetec SDK in FIVE, and FIVE's support of the .g64 and .g64x file types in 2017 (Build 10039). Indeed, it was a big deal. Genetec is a popular company and agencies around the western world face these file types quite often. Genetec makes a player for their files, but exporting individual camera views from that player takes a lot of time - especially in major cases with many relevant camera views.
Fast forward to yesterday.
As you may have heard, I'm no longer part of the Amped team. Amped support is now handled through their support web site. There's no more phone support for North American customers (announced via a change to their EULA that customers likely accepted without reading). But, with an unpronounceable Scandinavian last name, I'm easy to find. I've been getting many calls per day about various analysis-related / software things. Although I can't support software directly, I can still answer UX (how-to) type questions. I also still conduct training sessions around Amped's software (click here for the calendar of offerings).
Yesterday, an old friend called me in a panic. He had a major case, the files were .g64x, and FIVE wasn't working with them anymore. Yes, he knew that I'm no longer with Amped Software. But, he was running out of options and time.
What he didn't know, what most of Amped's customers don't know, is that in March of 2018, Amped SRL quietly removed the Genetec functionality from its product line (Build 10717). Its normally verbose change log notes only that this build was an interim update - nothing about what was changed. The functionality was not only removed from 10717, but also from 10039. Any mention of the Genetec functionality was stricken from the logs. If you search for Genetec on the Amped blog, you will not find anything. It's as if the Genetec integration never happened.
A few weeks ago, another old customer called with a similar story. In his case, he had performed the original work with the .g64 files in Build 10039. The case is moving to trial and he's updated FIVE faithfully. He's been asked to do some additional work on the case. But, the project file doesn't work. Usually, it's quick to duplicate your work and recreate a corrupted project file. But in this case, he can't. He no longer has the ability to work with these files in the same way. It's caused a huge issue with his case - a huge issue he didn't know was coming.
That brings me back to that initial lesson in dealing with the first generation of DVRs - save everything together. Yes, Amped's support portal gives users under support contracts the ability to download previous versions. But, as several have discovered, Amped has changed the functionality of at least one of those hosted installer builds (preserving the old build number while rolling out the updated version doesn't help). What else has changed in what's hosted there? How would you know?
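"How would you know?" is exactly the question cryptographic hashes answer. A minimal sketch: record a SHA-256 for every installer you retain at acquisition time, then re-hash anything you later download against that record. The folder name is hypothetical:

```python
import hashlib
from pathlib import Path

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large installers don't need RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hash every retained installer once, at acquisition, and keep the log
# with the case file. "installers" is a hypothetical archive folder.
for installer in sorted(Path("installers").glob("*.exe")):
    print(installer.name, sha256_of(installer))

# Later: if a re-downloaded "Build 10039" hashes differently than your
# archived copy, the hosted file has changed - same build number or not.
```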
All hope was not lost. An older laptop of mine was still configured from a demo done quite some time ago in support of Axon's sales team, and it still had the original Axon FIVE Build 10039 installed. I was able to help the users solve their problems with the Genetec files. I was happy to do so. If you need some help as well, let me know.
Thursday, April 4, 2019
The "NAS Report," 10 years later
It's been a bit over 10 years since Strengthening Forensic Science in the United States: A Path Forward was released (link). What's changed since then?
As a practitioner and an educator, I found Chapter 8 particularly significant. Here's what we should all know - here's what we're all responsible for knowing.
"Forensic Examiners must understand the principles, practices, and contexts of science, including the scientific method. Training should move away from the reliance on the apprentice-like transmittal of practices to education ..."
10 years later, has the situation changed? 10 years later, there are a ton of "apprentice-like" certification programs and just a handful of college programs. College programs are expensive and time-consuming. Mid-career professionals don't have the time to sit in college classes for years. Mid-career professionals don't have the money to pay for college. Those who have retired from public service and "re-entered" their profession on the private side face the same challenges.
Years ago, I designed a curriculum set for an undergraduate education in the forensic sciences. It was designed such that it could form the foundation for graduate work in any of the forensic sciences - with electives being discipline-specific. I made the rounds of schools. I made the rounds of organizations like LEVA. I made my pitch. I got nowhere with it. Colleges are non-profit in name only, it seems.
To address the problems of time and money, as well as proximity, I've moved the box of classes on-line. The first offering is out now - Statistics for Forensic Analysts. It's micro learning. It's "consumer level," not "egg head" level. It's paced comfortably. It's priced reasonably. It's entirely relevant to the issues raised in the NAS Report, as well as the issues raised in this month's edition of Significance Magazine.
I encourage you to take a look at Statistics For Forensic Analysts. If you've read the latest issue of Significance, and you have no idea what they're talking about or why what they're talking about is vitally important, you need to take our stats class. Check out the syllabus and sign-up today.
Wednesday, April 3, 2019
Why do you need science?
An interesting morning's mail. Two articles released overnight deal with forensic video analysis. Two different angles on the subject.
First off, there's the "advertorial" for the LEVA / IAI certification programs in the Police Chief Magazine.
The pitch for certification was complicated by this image:
The caption for the image further complicated the message for me: "Proper training is required to accurately recover or enhance low-resolution video and images, as well as other visual complexities."
Whilst the statement is true, do you really believe that the improvements to the image happened from the left to the right? Perhaps, for editorial purposes, the image was degraded from the original (R) to the result (L). If I'm wrong about this, I'd love to see the case notes and the specifics as to the original file. Can you imagine such a result coming from the average CCTV file? Hardly.
Next in the bin was an opinion piece by the Washington Post's Radley Balko - "Journalists need to stop enabling junk forensics." It's seemingly the rebuttal to the LEVA / IAI piece.
Balko picks up where the ProPublica series left off - an examination of the discipline in general, and Dr. Vorder Bruegge of the FBI in particular. It's an opinion piece, and it's rather pointed in its opinion of the state of the discipline. Balko, like ProPublica, has been on this for a while now (here's another Balko piece on the state of forensic science in the US).
I don't disagree with any of the referenced authors here. Not one bit. Jan and Kim are correct in that the Trier of Fact needs competent analysts working cases. Balko is correct in that the US still rather sucks at science. That we suck at science was the main reason the Obama administration created the OSAC and the reason Texas created its licensing scheme for analysts.
Where I think I disagree with Jan and Kim is essentially a legacy of the Daubert decision. Daubert seemingly outsourced the qualification process to third parties. It gave rise to the certification mills and to industry training programs. Training to competency means different things to different organizations. For example, I've been trained to competency on the use of Amped's FIVE and Authenticate. But, none of that training included the underlying science behind how the tool is used in case work. For that, I had to go elsewhere. But, Amped Software, Inc., certified me as a user and a trainer of the tools. That (certification) was just a step in the journey to competency, not the destination.
Balko, like ProPublica, notes the problems with pattern evidence exams. Their points are valid. But, it doesn't mean that image comparison can't be accomplished. It does mean that image comparisons should be founded in science. One of those sciences is certainly image science (knowing the constituent parts of the image / video and how the evidence item was created, transmitted, stored, retrieved, etc.). But another one of the necessary sciences is statistics (and experimental design).
As I noted in my letter to the editor of the Journal of Forensic Identification, experimental design and statistics form a vital part of any analysis. For pattern matching, the evidence item may match the CCTV footage. But, would a representative sample of similar items (shirts, jeans, etc) also match? Can you calculate probabilities if you're unaware of the denominator in the function (what's the population of items)? Did you calculate the sample size properly for the given test? Do you have access to a sample set? If not, did you note these limitations in your report? Did these limitations inform your conclusions?
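As a worked example of the sample size point, here's the standard formula for sizing a sample to estimate a proportion (say, how common a particular jacket is in the relevant population), with illustrative inputs:

```python
import math

def sample_size_for_proportion(z, margin_of_error, p=0.5):
    """n = z^2 * p * (1 - p) / e^2 - the classic sizing formula.

    p = 0.5 is the conservative (worst-case) choice when the true
    proportion is unknown.
    """
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

# Illustrative: 95% confidence (z ~ 1.96), +/- 5 percentage points.
print(sample_size_for_proportion(1.96, 0.05))  # 385
```

If you can't assemble anything close to the sample your question demands, that limitation belongs in your report.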
Both LEVA and the IAI have a requirement for their certified analysts to seek and complete additional training / education towards eventual recertification. This is a good thing. But, as many of us know, there are only so many training opportunities. "At some point, you kind of run out of classes to take" is a common refrain. Whilst this may be true for "training" (tool / discipline specific), it is so not true for education. There are a ton of classes out there to inform one's work. The problem there becomes price / availability. This price / availability problem was the primary driver behind my taking my Statistics class out of the college context and putting it on-line as micro learning. My other classes from my "curriculum in a box" concept will roll out later this year and into the next.
So to the point of the morning's articles - yes, you do need a trained / educated analyst. Yes, that analyst needs to engage in a scientific experiment - governed both by image science and by experimental science. Forensic science can be science, if it's conducted scientifically. Otherwise, it becomes a rhetorical exercise utilizing demonstratives to support its unreproducible claims.
Tuesday, April 2, 2019
Luminar 3 - First Look
The market for image editors is changing again. Your favorites are now available by subscription only. Some tools haven't been updated in quite some time. Others are virtually unobtainable for a variety of reasons.
The market is also growing. Folks dumping cell phones and computers just need a simple photo viewer and editor - one that's easy to use and doesn't cost a fortune. You might have heard about the alliance between Cellebrite and Input-Ace. That's cool, but the licenses are a bit pricey and not everyone will have access to those tools all of the time.
Years ago, I ran a very long beta test and proof of concept at the LAPD of what was first called Amped First Responder. Then, with the partnership between Axon and Amped, that product was shifted into the Axon ecosphere and renamed - Axon Investigator. Another round of beta tests. The partnership dissolved. The project went into dormancy. Amped later released Replay. But, the concept is different. Replay isn't an affordable bulk purchase. I wanted a simple editor that was affordable for the average investigator - so everyone could have a copy at their desktop.
My old colleagues need something simple and effective. Something affordable. The latest update of Luminar 3 from Skylum might just be it.
Luminar 3 is a photo editor. It's not for crazy CCTV codecs, or video for that matter. It's a competitor to Photoshop or Lightroom, not PremierePro.
Simple things like focus, colour, and sharpening are all built in and easy to use. To reference the Adobe UI, the tools in Luminar 3 seem like a blend of ACR and Lightroom.
I loaded one of my test files into the program. There's glare from the window. There's a colour cast. The license plate is far away, but there's enough nominal resolution to be able to resolve it. It's a typical task - here's a photo from my mobile, fix it and find the suspects.
30 seconds later, I had clarified the image and saved out a copy. I even took the time to save the settings (known as Looks in Luminar 3).
Once saved, the Look shows up in the UI under Luminar Looks: User Luminar Looks. You can have Looks for all of your typical tasks - which really speeds things up. Plus, if your saved Look is too strong, you can vary the amount and dial back the effect.
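Luminar's actual adjustments aren't public, so this isn't its algorithm - but the kind of 30-second cleanup described above (tame the glare, neutralize the cast, sharpen) can be sketched in Pillow for readers who want to see the moving parts. The file name is hypothetical:

```python
from PIL import Image, ImageEnhance, ImageFilter, ImageOps  # pip install Pillow

im = Image.open("mobile_photo.jpg").convert("RGB")  # hypothetical exhibit

im = ImageOps.autocontrast(im, cutoff=1)   # per-channel stretch: tames the
                                           # flat contrast and much of the cast
im = ImageEnhance.Color(im).enhance(1.1)   # gentle saturation nudge
im = im.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=3))

im.save("mobile_photo_clarified.jpg", quality=95)
```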
Luminar 3 is fast, it's user friendly, it's easy to learn, and it's incredibly affordable. Luminar 3 is just $69. Not per month - total. You own it. Plus, your purchase allows up to 5 installs (either Mac or PC). That's a bargain.
I'm going to be exploring some of the features of this handy tool that I think will work for various tasks in the justice system - from the crime scene to the court room - in future posts. In the meantime, it's worth a look. They even offer a free trial.
Monday, April 1, 2019
My Training Calendar
I've received a few emails about my training schedule. It's over in the Apex Partners web site's calendar (click here).
As of today, here's what's been booked:
April 29-May 1 - Introduction to Forensic Multimedia Analysis (with Amped FIVE).
May 2-3 - Advanced Processing Techniques (with Amped FIVE).
April 29 - May 3 - Book the whole week and save.
May 16-17 - Advanced Processing Techniques (emphasis on system engineering / optimization).
July 15-17 - Introduction to Forensic Multimedia Analysis (with Amped FIVE).
July 18-19 - Introduction to Image Authentication (with Amped Authenticate).
July 15-19 - Book the whole week and save.
Dec 9-11 - Introduction to Forensic Multimedia Analysis (with Amped FIVE).
Dec 12-13 - Advanced Processing Techniques (with Amped FIVE).
Dec 9-13 - Book the whole week and save.
But wait, there's more...
Statistics for Forensic Analysts - on-line micro learning. Click here for information or to sign up.
Redaction of Multimedia Evidence - on-line micro learning. If you're an Adobe, Magix, or Amped customer, there's a course dedicated to those tools. If you haven't chosen a toolset yet, there's a course that incorporates all three (plus a bonus module). Click here for more information or to sign up.
If you're interested in booking a training at your location, just let me know.