
Friday, December 15, 2017

Sample sizes and determinations of match

It's been a busy fall season, traveling the country and training a whole bunch of folks. Over lunch, the group I was with asked me about a case that's been in the news and wondered if we'd be discussing how to conduct a comparison of headlight spread patterns. That led us down quite the rabbit hole ...

Comparative analysis assumes a "known" and compares it to an "unknown." It's important to consider time & temporality - that one can only TEST in the present - in the "now." For the future / past, one can only PREDICT what happened "then." Testing and Prediction have their own rules.

Take the testing of an image / video of a headlight spread pattern. One attempts to compare the "known" vs. a range of possible "unknowns." Our lunch group mentioned a case where the examiner tested about a dozen trucks in front of the CCTV system that generated the evidentiary video, in addition to the vehicle in evidence, to try to make a determination. The examiner did, in fact, determine a match, as the report indicated.

The question really isn't the appropriateness of the visual comparison. The question is the appropriateness of the sample size such that the results can be useful / trusted. How did the examiner determine an appropriate sample size? Is a dozen trucks appropriate?

Individual headlight direction can be adjusted. Headlights come in pairs. Thus, there are two variables that are not simply on/off. In the world of statistics, they're continuous variables. You're testing two continuous variables against a population of continuous variables to determine uniqueness. Is this possible in real life? What's the appropriate sample size for such a test?

I use a tool called G*Power to calculate sample size. Just about every PhD student does. It's free and quite easy to use once you learn to speak its language. Most, like me, learn its language in graduate-level stats classes.

For example, if you've determined that an F-test of the equality of variances is the appropriate statistical test for your experiment, then select that test in G*Power.



Press the Calculate button, and G*Power calculates the appropriate sample size. In this case, the appropriate sample size is 266. There's a huge difference between 266 and a dozen. You can plot the results to track the increase in sample size relative to Power. If you want greater confidence in your results (Power), you need a larger sample size.
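If you don't have G*Power handy, the same kind of a priori power analysis can be sketched in Python with the statsmodels library. To be clear, this is a minimal sketch, not G*Power's exact computation; the effect size, alpha, and power values below are illustrative assumptions, so the resulting n will only match the figures above if your design's inputs happen to match.

```python
# A minimal a priori sample-size sketch (not G*Power itself) using
# statsmodels. The effect size, alpha, and power are illustrative
# assumptions -- your experimental design supplies the real values.
from statsmodels.stats.power import FTestAnovaPower, TTestIndPower

alpha = 0.05        # Type I error rate
power = 0.95        # desired power (1 - Type II error rate)
effect_size = 0.25  # assumed effect size

# F test across two groups (one-way ANOVA framing): returns TOTAL n.
n_f = FTestAnovaPower().solve_power(effect_size=effect_size, alpha=alpha,
                                    power=power, k_groups=2)

# Independent-samples t test: returns n for group 1 (ratio=1 -> equal groups).
n_t = TTestIndPower().solve_power(effect_size=effect_size, alpha=alpha,
                                  power=power, ratio=1.0,
                                  alternative='two-sided')

print(f"F test, 2 groups: total n ~ {n_f:.0f}")
print(f"t test:           n per group ~ {n_t:.0f}")
```

Either way, the required sample size falls out of the test you chose and the power you demand - not out of how many trucks happened to be available.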

The examiner's report should include a section about how the sample size was created and why the test used to calculate it was appropriate. It should have graphics like those below to illustrate the results.


It's vitally important that when conducting a comparative exam and declaring a "match," the examiner understands the necessary science behind that conclusion. "Match" usually does not mean "a Nissan Sentra." That's not helpful given the quantity of Nissan Sentras in a given region. "Match" means "this specific Nissan Sentra." Isn't the standard, "Of all the Nissan Sentras made in that model year, wheresoever dispersed around the globe, it's only this particular one and no other"?

What about the test? Did you choose the appropriate test?

What if, on the other hand, you determined that the appropriate test is the Wilcoxon signed-rank test (the non-parametric counterpart of the t-test)? Then the sample size would be different. With that test, the appropriate sample size would be 47. That's still not a dozen.
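For the curious, one common way to size a non-parametric test like the Wilcoxon is to start from the matching parametric t-test and correct it by the asymptotic relative efficiency (A.R.E.) - G*Power offers an A.R.E.-based option for exactly this. The sketch below is just that idea in Python; the effect size, alpha, and power are illustrative assumptions, not the numbers from my screenshots.

```python
# A rough sketch (not G*Power's exact algorithm): size the paired t test,
# then inflate it by the asymptotic relative efficiency (A.R.E.) of the
# Wilcoxon signed-rank test. Under a normal parent distribution the A.R.E.
# is 3/pi (about 0.955). Effect size, alpha, and power are assumptions.
import math
from statsmodels.stats.power import TTestPower

n_t = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.95,
                               alternative='two-sided')
are = 3 / math.pi                 # ~0.955 for a normal parent distribution
n_wilcoxon = n_t / are            # more observations needed than the t test

print(f"paired t test:        n ~ {math.ceil(n_t)}")
print(f"Wilcoxon signed-rank: n ~ {math.ceil(n_wilcoxon)}")
```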


What happens if you like the T-test and opposing counsel's examiner likes the F-test? What happens when two examiners disagree? Do you have the education, training, and confidence to defend your choice and your results in a Daubert hearing?

Perhaps you've been trained in the basics of conducting a comparative examination. But have you been trained / educated in the science of conducting experiments? Do you know how to choose the appropriate tests for your questions? Do you know how to structure your experiment? Do you know how to calculate the appropriate sample size for your tests?

To wrap up, when concluding that a particular vehicle can't be any other because you've compared the headlight spread pattern in a video to several vehicles of the same model / year, it's vitally important to justify the sample size of comparators. You must choose the appropriate test and calculate the sample size based on that test. ASTM 2825-12's requirement that one must produce a report such that another similarly trained / equipped person can reproduce your work means that you must include your notes on the calculation of the sample size. If you haven't done this, you're just guessing and hoping for the best.

Friday, October 27, 2017

Forensically sound

"Is it forensically sound?"

I've heard this question asked many times since I began working in forensic analysis many years ago. Me being me, I wanted to know what it meant to be "forensically sound." Here's what I found as I took a journey through the English language.

"Forensically." The root of this is "forensic."


The root language for the English word "forensic" is the Latin "forensis." It means a public space, a market, or in open court.


Forensis means "of or pertaining to the market or forum." Another way of looking at this can be, activities that happen in the market, forum, public space, or open court.

Ok. We've got "forensically" down. What about "sound"?



Sound, from the Old English, means that which is based on reason, sense, or judgement, and/or that which is competent, reliable, or holding acceptable views.

Put together, and given the context of our work, "forensically sound" can mean that activity, related to work for the court / public, which is well founded, reliable, and logical - which is based on reason, sense, or good judgement.

Great, we've now got a working definition. Now how does it apply to our efforts?

In the US, the Judge acts as the "gatekeeper." In providing this "gatekeeper" function, the Judge should weigh the foundation and reliability of the evidence being submitted in the particular case. When questions arise as to science, validity, and/or reliability, either party can ask the Judge for a hearing on the evidence and explore these issues (i.e. Daubert Hearing).

One of the ways that Judges evaluate the work is by comparing the work product to known standards. In our discipline,  we can find standards at the ASTM. For image / video processing, the standard is ASTM 2825. Taking a step back, standards are "must do" and guidelines are "may do."

Thus, if you've followed ASTM 2825 (meaning your work can be repeated), and you use valid and reliable tools, your work is "forensically sound." It's a two part evaluation - you and your tools.

Did you work in a valid, reliable, repeatable, and reproducible way? Are your tools valid and reliable? If the answer is yes to all of these, then your work is forensically sound.

In the many times that I've been asked to evaluate another person's work (i.e. from opposing counsel), this is the standard with which I work. It forms a checklist of sorts.

  • Do I have the same evidence as the opposing side? (i.e. true/exact copy)
  • Is there a written report that conforms to ASTM 2825-12? This assures that I can attempt the tests and thus attempt to reproduce their results. 
That's really it for me. Others may concentrate on training and education and certifications. I really don't. If they aren't trained / educated, it will show in their reporting. To be sure, there are avenues to explore if you have the other person's CV (verify memberships, certifications, education etc.). But, I would hope that folks wouldn't embellish their CV. It's so easy to fact check these days, why lie about something that can be easily discovered via Google?

You have a copy of the evidence and the opposing counsel's report. You attempt to reproduce the results. Two things can happen.
  1. You successfully reproduce the results and come to the same conclusion.
  2. Your results differ from that of the opposing counsel.
If the answer is #1, you're finished. Complete your report and move on. If the answer is #2, can you try to figure out the errors? Your report may include your conclusions as to what went wrong on the other side and why yours is the correct answer. 

I hope this helps...

Wednesday, September 20, 2017

What you know vs. what you can prove

I had an interesting evening. A friend sent a link to a YouTube video, a recorded webinar for a "video analysis" product. I'll admit. I was curious. I watched it. Below is my commentary on what I saw.

The presenter outlined his workflow for working with video from a few different sources. The presentation turned to the difference between how DirectShow handles video playback vs. what's actually in the data stream. The presenter showed how DirectShow may not give you the best version of the data to work with. If you've been around LEVA for a while, you likely know this already. It's good information to know.

Then the presenter did a comparison of a corrected frame from the data stream vs. a frame from the video as played via the DirectShow codec - in Photoshop. He made an interesting statement that prompted me to write this post. He was clicking between the layers so that viewers could "see" that there was a difference between the two frames. The implication was that the viewer could clearly "see" the difference. He was making the point that one frame had more information / a clearer display of the information - illustrating it visually (demonstratively).

This got me thinking - does he know how people "see?"


I'll get to the difference between his workflow (using many tools) and FIVE (using one tool) in a bit. But first, I want to address his point about "clearly seeing."

I do not hide the fact that I am autistic. Thanks to the changes in the DSM, my original brain wiring diagnoses that were made during the reign of DSM IV now place me firmly on the autism spectrum in DSM V. Sensory Processing Disorder and Prosopagnosia have made life rather interesting; especially growing up in a time when doctors didn't understand these wiring issues at all. As an analyst, they present challenges. But they also present opportunities. Can a person doing manual facial comparison be accused of bias if they're face blind? Not sure. Never been asked. But I digress.

Let's not forget that the majority of what we do involves some sort of visual examination. According to just about every agency's entry rules, examiners must have normal color vision, depth perception, and sufficiently good corrected vision. Vision is acceptable if it is 20/20 or better uncorrected. If vision is uncorrected at 20/80 or better and can be corrected to 20/20 by the use of glasses or hard contacts, it is acceptable. If vision is uncorrected at 20/200 or better and can be corrected to 20/20 by the use of soft contacts, it is acceptable. Vision surgically corrected (such as by radial keratotomy or Lasik) to 20/20 is acceptable once visual acuity has stabilized. All of this is to say, how your mechanical devices (eyes, ears, brain) interact with your environment helps to form what you "know."

I've spent my academic life studying the sensory environment. My PhD dissertation focuses on the college sensory environment - one so hostile that autistic college students would rather drop out than stick it out. But again, I've studied and written extensively on the issue of what people perceive, so the presenter's statement struck me.

It also struck me from the standpoint of what we "know" vs. what we can prove.

The presenter took viewers on quite a tour of a hex editor, GraphStudio, his preferred "workflow engine," and Photoshop before making the statement that prompted this post. A lot of moving parts. Along the way, the story of how information is displayed and why it's important to "know" where differences can occur was driven home.

Yes, we can all agree that there are differences between how DirectShow shows things and how a raw look at the video shows things. It may be helpful to "see" this. But what if you don't perceive the world in the same way as the storyteller?

Might there be another way to perform this experiment that doesn't rely on the viewer's perception matching that of the presenter?

Thankfully, with FIVE, there is.

The presenter started with Walmart (SN40) video being displayed via DirectShow. So, I'll start there too. SN40, via DirectShow, displays as 4 CIF.


Then, I used FIVE's conversion engine to copy the stream data into a fresh container.
It displays as 2 CIF.
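(For readers without FIVE: the general idea of a stream copy - remultiplexing the existing video data into a new container without re-encoding - can be sketched with FFmpeg called from Python. This is not what FIVE does internally, just the same concept; the file names are hypothetical, and FFmpeg must be installed and on your PATH.)

```python
# Illustration of the stream-copy concept (not FIVE's conversion engine):
# remultiplex the existing video stream into a fresh container without
# re-encoding. File names are hypothetical; FFmpeg must be on the PATH.
import subprocess

subprocess.run(
    ["ffmpeg", "-i", "walmart_sn40_export.dav",   # proprietary export (assumed name)
     "-c", "copy",                                # copy the stream, do not re-encode
     "stream_copy.mkv"],                          # container that accepts most codecs
    check=True)
```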


I selected the same frame in each video and bookmarked them for export.



I brought these images back into FIVE for analysis.

The issue with 2 CIF is that, in general, every other line of resolution isn't actually recorded and needs to be restored via some valid and reliable process. FIVE's Line Doubling filter allows me to restore these lines. I can choose the interpolation method during this process. The presenter in the video chose a linear interpolation method to restore the lines (in Photoshop - not his "workflow engine"), so I did the same.
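If you want to see what linear line doubling actually does to the pixels, here's a rough illustration with OpenCV in Python. This is not FIVE's Line Doubling filter - just the underlying idea: stretch the 2 CIF field back to full height and let linear interpolation rebuild the missing rows. The file name is a hypothetical placeholder.

```python
# Rough illustration of linear line doubling (not Amped FIVE's filter):
# a 2 CIF field is stretched to twice its height, and INTER_LINEAR rebuilds
# each missing row by interpolating between the recorded rows above/below.
# The file name is a hypothetical placeholder.
import cv2

field = cv2.imread("stream_copy_frame_2cif.png")      # e.g. a 704 x 240 field
h, w = field.shape[:2]
restored = cv2.resize(field, (w, h * 2),               # double the line count
                      interpolation=cv2.INTER_LINEAR)
cv2.imwrite("restored_linear.png", restored)
```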


I've now restored the stream copy frame. I wanted not only to "see" the difference between frames ("what I know"), I wanted to compute the difference between frames ("what I can prove").

Again, staying in FIVE, I linked the DirectShow frame with the Stream Copy frame using the Video Mixer (found in the Link filter group).


The filter settings for the Video Mixer contain three tabs. The first tab (Inputs) allows the user to choose which processing chains to link, and at what step in the chain.


The second tab (Blend) allows the user to choose what is done with these inputs. In our case, I want to Overlay the two images.


The third tab (Similarity) is where we transition from the demonstrative to the quantitative. Unlike Photoshop's Difference Blend Mode, FIVE doesn't just display a difference (is there a threshold where a difference is present but not displayed by your monitor?); it computes similarity metrics.


With the Similarity Metrics enabled, FIVE computes the Sum of Absolute Difference (SAD), the Peak Signal to Noise Ratio (PSNR), Mean Structural Similarity, and the Correlation Coefficient. The actual difference, computed 4 different ways. You don't just "see" or "know" - you prove.
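If you want to sanity-check numbers like these outside of FIVE, the same four metrics can be computed with NumPy and scikit-image. A minimal sketch, with hypothetical file names, assuming both frames share the same dimensions:

```python
# A sketch of the four similarity metrics outside of FIVE, using OpenCV,
# NumPy, and scikit-image. File names are hypothetical placeholders; the
# two frames must share the same dimensions.
import numpy as np
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

a = cv2.imread("directshow_frame.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
b = cv2.imread("stream_copy_frame.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

sad  = float(np.abs(a - b).sum())                       # Sum of Absolute Differences
psnr = peak_signal_noise_ratio(a, b, data_range=255)    # Peak Signal to Noise Ratio
ssim = structural_similarity(a, b, data_range=255)      # Mean Structural Similarity
corr = float(np.corrcoef(a.ravel(), b.ravel())[0, 1])   # Correlation Coefficient

print(f"SAD={sad:.0f}  PSNR={psnr:.2f} dB  SSIM={ssim:.4f}  r={corr:.4f}")
```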


The reporting of this is done at the click of the mouse. FIVE has been keeping track of your processing, and the results are compiled and produced on demand - no typing your report. (My arthritic fingers thank the programmers each day.)


Reports in the PDF/A standard mean the greatest compatibility when dealing with customers. Click on the hyperlink in the report and read the explanation of what was done, the settings, and the academic/scientific source for the test. This means that FIVE's reports are fully compliant with ASTM 2825-12. Are Photoshop's reports compliant? What about your "workflow engine"? Hint: they are if you type them in such a way as to assure compliance. Who has time for that?

Total time for this experiment was under 5 minutes. I'm sure the presenter could have been faster than what was shown in the webinar; he was explaining things. But he used a basket of tools - some free and some not. He also didn't take the viewers through compiling an ASTM 2825-12 compliant report. Given the many tools used, I'm not sure how long that takes him.

When considering his proposed workflow, you need to consider the total cost of ownership of the whole basket as well as the cost of training on those tools. You can also factor in how much time is spent/saved doing common tasks. I've noted before that prior to having FIVE, I could do about 6 cases per day. With FIVE, I could do about 30 per day. Given the amount of work in Los Angeles, this was huge.

For my test, I used one tool - Amped FIVE. I could do everything the presenter was doing in one tool, and more. I could move from the demonstrative to the quantitative - in the same tool.

Now, to be fair, I am retired now from police service and work full time for Amped Software. OK. But, the information presented here is reproducible. If you have FIVE and some Walmart video, you can do the same thing in this one amazing tool. Because I come from LE, I am always evaluating tools and workflows in terms of total cost of ownership. Money for training and tech is often hard to come by in government service and one wants the best value for the money spent. By this metric, Amped's tools and training offer the best value.

If you want more information about Amped's tools or training, head over to the web site.


Saturday, September 2, 2017

Changing times

I've been in the "video forensics" business for quite some time now. I've seen enough to notice trends in the industry. I've seen people come and go. Today, I want to comment on a coming trend that I believe will impact everyone in the business, LEOs and privateers alike.

Here's what I mean.

Going back to about 2006, the economy was booming and folks were happy. Then 2007 hit and the economy tanked. As belts tightened, people cut back on entertainment and other non-essential things. A result of this was major cut-backs in the movie business. Many out-of-work editors and producers entered the business of video forensics. They guessed that, because of their knowledge of the tools - Avid MC, Premiere Pro, Final Cut, etc. - they could go out there and compete for work, offering their services and "expertise" in video to the courts, attorneys, PIs, and the like. There were few success stories and a lot of colossal fails. Very few of these folks are still around.

Another trend is emerging.

In the push to assure future success, parents have been steering their kids to STEM degrees. Many have pursued and achieved doctorates in the STEM fields only to find that there is a glut of people on the market with such degrees (in my academic field, there's about a 600/1 ratio of applicants to jobs/grants). Some are leaving their degree field, using their expertise in experimental design and statistics (gained by every PhD) in a variety of useful ways (Think Moneyball).

A case* from Arizona last year serves as the canary in the video forensics coal mine. It's a firearms case, but all the issues can easily be applied to our field. In State v Romero (2016), the Arizona Supreme Court said that the trial court erred in not allowing the defense to call their "expert." The person in question wasn't a firearms examiner or a tool-mark examiner. He is an expert in Experimental Design, with a PhD in the discipline.

Here's some relevant parts of the ruling:

"...Dr. Haber was not offered to testify whether Powell had correctly analyzed the toolmarks on the shell casings. Instead, Dr. Haber, based on his expertise in the broader field of experimental design, criticized the scientific reliability of drawing conclusions by comparing tool marks."

"...Arizona Rule of Evidence 702 allows an expert witness to testify if, among other things, the witness is qualified and the expert’s “scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence . . . .” Trial courts serve as the “gatekeepers” of admissibility for expert testimony, with the aim of ensuring such testimony is reliable and helpful to the jury."

Hint: every state court and the US federal courts have a similar rule governing expert witnesses and their testimony.

"... The trial court here concluded that Dr. Haber was not qualified to testify as an expert in firearms identification. In affirming, the court of appeals noted that Dr. Haber, although having reviewed the literature on firearms identification, had not previously been retained as an expert on firearms identification, conducted a toolmark analysis, attempted to identify different firearms, or conducted research on firearms identification. 236 Ariz. at 458 ¶¶ 23-25, 341 P.3d at 500."

"... The issue, however, is not whether Dr. Haber was qualified as an expert in firearms identification, but instead whether he was qualified in the area of his proffered testimony — experimental design. Here, the trial court determined that Powell was qualified to offer an expert opinion that the shell casings were all fired from the same Glock. But Romero did not offer Dr. Haber as an expert in firearms identification to challenge whether Powell had correctly performed his analysis or formed his opinions. Instead, Dr. Haber’s testimony was proffered to help the jury understand how the methods used by firearms examiners in performing toolmark analysis differ from the scientific methods generally employed in designing experiments."

Did you catch that? Dr. Haber was retained to challenge the validity of the method used in the prosecution's examination - to illustrate "... how the methods used by firearms examiners in performing toolmark analysis differ from the scientific methods generally employed in designing experiments."

"... Under Rule 702, when one party offers an expert in a particular field (here, the State’s presentation of Powell as an expert in firearms identification) the opposing party is not restricted to challenging that expert by offering an expert from the same field or with the same qualifications. The trial court should not assess whether the opposing party’s expert is as qualified as — or more convincing than — the other expert. Instead, the court should consider whether the proffered expert is qualified and will offer reliable testimony that is helpful to the jury.  Cf. Bernstein, 237 Ariz. at 230 ¶ 18, 349 P.3d at 204 (noting that when the reliability of an expert’s opinion is a close question, the court should allow the jury to exercise its fact-finding function in assessing the weight and credibility of the evidence)."

"... The gist of Dr. Haber’s proffered testimony was that the methods generally used in conventional toolmark analysis fall short of scientific standards for experimental design. Dr. Haber’s testimony was therefore directed at the scientific weight that should be placed on the results of Powell’s tests. Such questions of weight are emphatically the province of the jury to determine. E.g., State v. Lehr, 201 Ariz. 509, 517 ¶¶ 24–29, 38 P.3d 1172, 1180 (2002). "

"... Apart from Dr. Haber’s qualifications, his testimony would not have been admissible unless it would have been helpful to the jury in understanding the evidence. Ariz. R. Evid. 702(a). The State presented Powell’s testimony that the indentations on shell casings demonstrated that the Glock had fired all the shells, including those at the murder scene, and the State argued that the toolmark comparisons demonstrated a match to “a reasonable degree of scientific certainty.” Dr. Haber’s testimony would have been helpful to the jury in understanding how the toolmark analysis differed from general scientific methods and in evaluating the accuracy of Powell’s conclusions regarding “scientific certainty.”"

"... The thrust of Dr. Haber’s testimony was that the methods underlying toolmark analysis (here comparing indentations and other marks on shell casings) are not based on the scientific method, but instead reflect subjective determinations by the examiner conducting the analysis. Haber would have explained that unlike experts who use other forms of forensic analysis rooted in the scientific method, firearms examiners do not follow an accepted sequential method for evaluating characteristics of fired shell casings and comparing them to control subjects. By describing the methods used by toolmark examiners, Dr. Haber’s testimony could have helped the jury assess how much weight to place on Powell’s “scientific” conclusion that the shell casings at the murder scene could only have been fired from the Glock found by the police when they stopped Romero." How big was the sample size in your experiment? How did you determine the appropriateness of that size? How did the casing's markings compare to a normal distribution of values derived from the sample / control subjects?

"... One of his critiques of the methodology used by firearms examiners is that they do not employ identifiable, standardized protocols." Show me the peer-reviewed, published source that describes the method used.

"... Dr. Haber’s testimony was intended to highlight that the conclusions drawn by firearms examiners from toolmarks do not result from the application of articulable standards and lack typical safeguards of the scientific method such as independent verification by other examiners. Thus, Dr. Haber’s testimony could have helped the jury to understand any eficiencies in the experimental design of toolmark analysis and to assess any suggestion that such analysis was “scientific.” Cf. Salazar-Mercado, 234 Ariz. at 594 ¶ 15, 325 P.3d at 1000 …" Who checked your work and signed-off on it? 

So why such a long post? I saw a video over on Deutsche Welle called "Crime fighting with video forensics." In it, the featured person made this statement: "each vehicle has a unique headlight spread pattern." Does it now? How does he know this? Did he conduct a study? Where is it published? Has he ever been asked to prove out his methodology? What was the sample size of the experiment? How was the appropriate size for the sample calculated? How would his "headlight spread pattern" methodology stand up to cross-examination by an attorney prepared by someone with knowledge of experimental design? Remember, there are a lot of out-of-work PhDs out there. What would happen if Dr. Haber was the opposing expert in your case?

The Reddit Bureau of Investigation tackles the subject here. A link from that page contains the following quote: "... all the things your describing sound almost.... Imperfect? I mean, it scares me to think I might get pinned for a crime because I have a similar headlight spread as someone else … So what I'm asking is, are techniques like headlight spread and clothing identification taken very seriously in court? ..." According to the DW story, the matching of the "headlight spread pattern" did lead to a conviction in the highlighted case. The posts are about 5 years old. Plenty of time for someone to actually test this method and publish results - not just post questions on Reddit. But, I can't find any studies in the academic repositories.

Now, I may seem to be picking on one person. I'm not. I'm picking on the use of techniques that are called "science" but have no foundation in any science or the scientific method. I found police-led training on the subject with a simple Google search. Well-meaning folks will be exposed to this topic and begin to use it in their investigations - perhaps unaware of the challenges to its validity that they may face if/when they testify as to their work.

Errors in conclusions and the use of untested methodologies threaten forensic science. It's not me saying this, it's the focus of the NAS Report. It's the reason the OSAC was created. If you're in the "video forensics" discipline, and you're giving your OPINION about something related to the evidence, PLEASE be sure that your opinion is grounded in valid and reliable science - science that you can quote when asked. For example, if you're using the Rule of Thirds to calculate the height of an unknown subject / object in a CCTV video, you will have problems under a capable cross examination. Where in academics / science can you find a paper that tells you how to employ this method for this purpose? Hint, you can't. If you're using Single View Metrology in your measurements, you'll easily find the source document for this technique as well as the many papers that cite this technique.

And this is where the weakness in many "analysts" work can be found. When giving your opinion, what is the source of your conclusion? Which paper? Which study? How about simply listing your references / sources in your report so there's no confusion as to the basis of your opinions?

My entry into grad school opened my eyes as to what I didn't know and what the various trade groups where I'd received my training couldn't prepare me for. My pathway to my dissertation had me laser focussed on stats, experimental design, sample sizes, validity, and defending my work in front of people who have gone down a similar path and know way more than me. It's humbling to defend one's work - to be cross-examined by such brilliant people. But, iron sharpens iron. I'm the better for it.

Rather than tell you it'll be OK, I'm saying watch out. You're heading down an unsustainable path. If folks want to continue to use this method - "headlight spread pattern analysis" - probability says that there's going to be a challenge. Do you want that to be you? Are you prepared for it?

Something to think about ...

*I'm not an attorney. This is not legal advice. This is not about one person or one case, but the use of untested / un-scientific techniques. Check your six. Relax. Breathe. Love.

Friday, May 26, 2017

Daubert or Frye?


Part of doing the work for Forensic Multimedia Analysis is the eventual testimonial experience. Depending on the state in which you work, or the state in which the case is being heard, different rules will apply.

In my state - Nevada, for example - to the extent that Daubert espouses a flexible approach to the admissibility of expert witness testimony, the Supreme Court of Nevada has held it is persuasive.  Higgs v. State, 222 P.3d 648, 126 Nev. Adv. Rep. 1, 2010 Nev. LEXIS 1 (Nev. 2010).

California, however, is a Frye state. People v. Leahy, 882 P.2d 321 (Cal. 1994), rejected the Daubert standard.

Next week, I'll be teaching in Rhode Island. RI R. Evid. Art. VII, Rule 702 adapted the post-Daubert standards determined by the Supreme Court.

For a quick check of your state's rules, click here.

Thursday, May 25, 2017

FFT for video - yes, video

Eliminating repeating pattern noise from images has been a pain for forensic analysts for quite some time. There are tons of apps and plug-ins, ranging in price from free to cost-prohibitive. Some of their limitations include: they only work on still images, their reliability is spotty, and you must document your work without quite knowing how the tool is doing what it's doing.

Imagine having to account for the frequency spikes in each frame of video. This has been the barrier to using FFT tools to fix noise in video. Until now ...

Enter Amped FIVE (Axon Five*). FFT has been a part of the tool set for quite some time. Last year, we added Automatic Selection functionality. Auto Selection Mode automatically identifies the frequencies to remove without user intervention. What's the big deal? VIDEO!!!
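For readers who want to see the principle behind this (to be clear, the sketch below is not Amped's algorithm), periodic pattern noise shows up as isolated spikes in a frame's frequency spectrum. Suppress the spikes, invert the transform, and the repeating pattern largely disappears. In the Python sketch below, a crude threshold stands in for FIVE's Auto Selection, and the file name is hypothetical.

```python
# The principle behind FFT pattern-noise removal (not Amped's algorithm):
# periodic noise appears as isolated spikes in the magnitude spectrum.
# Zero them out (a crude threshold stands in for Auto Selection), then
# invert the transform. File name is a hypothetical placeholder.
import numpy as np
import cv2

frame = cv2.imread("noisy_frame.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

spectrum = np.fft.fftshift(np.fft.fft2(frame))
magnitude = np.abs(spectrum)

h, w = frame.shape
cy, cx = h // 2, w // 2
candidates = np.ones_like(magnitude, dtype=bool)
candidates[cy - 10:cy + 10, cx - 10:cx + 10] = False      # protect image content near DC
spikes = candidates & (magnitude > 10 * np.median(magnitude))
spectrum[spikes] = 0                                       # notch out the periodic noise

cleaned = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
cv2.imwrite("cleaned_frame.png", np.clip(cleaned, 0, 255).astype(np.uint8))
```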

Check out the video below to see FFT with Auto Selection Mode in action.



For the most part, it's a one-click fix.



As folks buy cheap CCTV systems meant for an indoor installation and place the cameras and cables outside (in the weather), the components degrade, corrode, and create all sorts of problems for the video signal. With our tools, including FFT with Auto Selection, you can restore the video accurately. Then, when finished, you'll find that our reporting functionality has kept track of your activities. Just generate the report. The ease of use of our reporting tool is second to none.

*The Axon Forensic Suite tools are powered by Amped Software technologies.

Wednesday, May 24, 2017

Unroll 360 degree camera views with one click

ClickIt DVR interface

Many companies have made the decision to cover the interior of their establishments with as few cameras as possible. They do this with 360 degree cameras, big fisheye lenses, or a combination of the two. They get to check the box of having CCTV. You get to try and fix this mess back in the lab.

ClickIt DVR 360 degree view camera

In this case, the ClickIt DVR allows the user to segregate camera views and output individual views as separate files. But, you're still left with that hideous 360 degree camera view.

Not to worry, we've got you covered with Amped FIVE. There's a little gem of a filter that's been in the Edit filter group for a while now. It's called Unroll.


Unroll is one of the many "easy buttons" that are found in Amped FIVE. Let's take a look at what happens when you activate this powerful filter.


Upon activation of the Unroll filter, the file is unrolled into a panorama. It might be upside down / backwards. Don't worry, that's an easy fix as well. Check out the Flip filter, also in the Edit filter group. In this case, I flipped the file both horizontally and vertically (using the signage in the scene to judge the correct orientation).
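If you're curious about what an unroll is doing under the hood (illustration only - this is not FIVE's implementation), the unwrap is essentially a polar-to-cartesian remap. A rough Python sketch with OpenCV follows; the centre and radius are assumptions you would normally measure from the image, and the file name is hypothetical.

```python
# Illustration only (not FIVE's Unroll filter): unwrap a 360-degree
# ceiling-camera image into a panorama with a polar-to-cartesian remap.
# The optical centre and radius are assumptions measured from the image;
# the file name is a hypothetical placeholder.
import math
import cv2

img = cv2.imread("ceiling_360.png")
h, w = img.shape[:2]
center = (w / 2, h / 2)                    # assumed optical centre
radius = min(h, w) / 2                     # assumed usable radius

# warpPolar output: rows = angle, columns = radius; rotate to lay it flat.
unwrapped = cv2.warpPolar(img,
                          (int(radius), int(2 * math.pi * radius)),
                          center, radius, cv2.WARP_POLAR_LINEAR)
pano = cv2.rotate(unwrapped, cv2.ROTATE_90_COUNTERCLOCKWISE)
pano = cv2.flip(pano, 0)                   # flip if the scene reads upside down
cv2.imwrite("unrolled_panorama.png", pano)
```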


In all, from conversion of format to a correctly oriented view, this fix took less than 2 minutes. That is the power of FIVE.

If you'd like more information about tools or training offerings, contact me today.

Tuesday, May 23, 2017

A report formatting tip for FIVE

One of our customers called in to ask about the formatting of the reports that are generated in FIVE. Specifically, she wanted to know why FIVE mashed all the sentences together and didn't allow her to format the Description field into paragraphs. I explained that it does. It's done in the Project Properties dialog box. Here's how:

report-formatting

Notice the line break in the text. That's how it's done. If you want a carriage return / line break, add the break at the end of your line. If you want a blank line between paragraphs, add one more break, as shown in the graphic above.

Then your report will look like this:

report-formatting-2

I would usually just add the question to be answered by the file and any scale or other necessary notes.

Hopefully, this tip helps you better organize your report's Description field.

Monday, May 22, 2017

Resourcefulness

Say what you want about him, Tony Robbins knows how to motivate people. Here's one of my favorite Robbins quotes: "Everything you people have told me; I didn't have the technology, I didn't have the right contacts, I didn't have the time, I didn't have the money. Those are resources. And so you're telling me, 'I failed because you didn't have the resources.' And I'm telling you what you already know. Resources are never the problem. It's a lack of resourcefulness. This is why you failed. Creativity, decisiveness, honesty, passion, love, these are human resources. When you engage these resources, you can get any other resources on earth. Resourcefulness is the ultimate human resource. If you don't have what you want, stop telling yourself a story; you don't have the money, you don't have time ... that's BS. It's because you haven't committed yourself where you would burn your boats. If you want to take the island, burn your boats. You will take the island. Because people, when it's either die or succeed, will tend to succeed."

Think about that for a moment.

As conversations in the media revolve around body worn cameras, and police agencies try to budget for not only the cameras but also the storage, private citizens and businesses are generating digital multimedia evidence (digital CCTV systems, mobile devices, on-line sources, and etc.) at a rate of almost 10:1 vs. BWC files. Yes, for every file generated by a body worn camera, roughly 10 digital CCTV files are being generated, retrieved, processed, and stored.

Agencies are investing lots of money around BWC. What about the retrieval, processing, analysis, and storage of digital multimedia evidence? If it accounts for 10x the amount of evidence files, shouldn't it at least get the same funding level as BWC? Sadly, it doesn't.

In my years in LE, I was able to find some very creative ways to kit out my lab. There's money (resources) in a lot of places. You not only have to know where to look, you have to know how to convince the people controlling those funds of the need to share it with you.

Monday, May 15, 2017

In praise of virtual machines and controlled installations of codecs and players

It's been quite a while since I've posted in this space. I've been over on LinkedIn and the Amped Software blog. I just wanted to take a moment to mention what I've been up to lately.


Last month in the Advanced Processing Techniques class, controlled installation of codecs and players in a virtualized environment was one of the topics of discussion. By controlling the installation and tracking the changes, you really get to see what havoc is wrought against your OS.

I rather prefer not to have to install players and codecs. That's the beauty of proxy files and Amped DVRConv / Amped FIVE. But when I have to install them, having the ability to work in virtual space is huge. Our licensing model means that the USB dongle can easily be accessed from within the space - making it easy to work with any of the Amped Software tools in virtualized environments.

Next month, I'll be presenting our tools at the Axon Accelerate Conference. You can click on the link to register. It's going to be a great event. Then, it's back to the office for more training sessions.

Enjoy.