Tuesday, July 30, 2013
Noritsu, the massive Japanese photo-processing company, is making the jump to the US forensics market with its AccuSmart Vision Forensic Video Processing (ASV) product. I haven't gotten hold of a copy yet, but it should be interesting to see what they're up to.
No idea on pricing or support, but it might be worth a look. Stay tuned.
Monday, July 29, 2013
Video playback resolution in Photoshop CC
This just in from @julieannekost: "Photoshop CC has added various resolutions to playback video. The default is 50% and other options are 25% and 100%. Setting a lower resolution can increase playback performance when working with high resolution video (in previous versions, Photoshop auto-dropped the resolution for faster playback). To select the playback resolution, click the gear icon in the Timeline panel. Note: the Loop Playback check box also appears in the gear menu."
Friday, July 26, 2013
Falsification: looking for evidence to disprove the theory
We often get asked questions in court that are posed in the context of research science, and many analysts are confused by the term "falsification." In the context of research science, it means looking for evidence to disprove the theory.
De Vaus notes, "As well as evaluating and eliminating alternative explanations we should rigorously evaluate our own theories. Rather than asking 'What evidence would constitute support for the theory?', ask 'What evidence would convince me that the theory is wrong?' It is not difficult to find evidence consistent with a theory. It is much tougher for a theory to survive the test of people trying to disprove it." (De Vaus, 2001).
"... falsificationism stresses the ambiguity of confirmation . . . corroboration gives only the comfort that the theory has been tested and survived the test, that even after the most impressive corroborations of predictions it has only achieved the status of 'not yet disconfirmed'. This . . . is far from the status of 'being true'." (Cook and Campbell, 1979)
If you are offering "technician level" testimony ("I performed this workflow using these tools ...") and offering the court no opinion, you'll likely not get any questions related to falsification. But if you are offering your opinion, your theory, then be prepared for this line of inquiry. Being prepared starts with knowing what they're asking.
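To make the concept concrete in our domain, consider a theory like "the DVR did not drop frames during the event." A falsification-minded check doesn't gather frames that are consistent with the theory; it hunts for the timing gaps that would disprove it. Here's a minimal Python sketch - the timestamps, nominal interval, and threshold are all hypothetical, purely for illustration:

    # Falsification in practice: try to DISPROVE the theory
    # "no frames were dropped" by hunting for timing gaps.
    # Timestamps and threshold below are hypothetical examples.
    def try_to_falsify_no_drops(timestamps_ms, nominal_interval_ms=33.3, tolerance=0.5):
        """Return every inter-frame gap that contradicts the theory."""
        counter_evidence = []
        for i in range(1, len(timestamps_ms)):
            gap = timestamps_ms[i] - timestamps_ms[i - 1]
            # A gap well beyond one nominal interval is evidence of a drop.
            if gap > nominal_interval_ms * (1 + tolerance):
                counter_evidence.append((i, gap))
        return counter_evidence

    frames = [0.0, 33.3, 66.7, 100.0, 200.0, 233.3]  # made-up timestamps
    gaps = try_to_falsify_no_drops(frames)
    if gaps:
        print("Theory falsified - suspicious gaps:", gaps)
    else:
        print("Not yet disconfirmed - which, as Cook and Campbell note, is not 'true'.")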
Thursday, July 25, 2013
Seriously!? Adobe
I get about 5 of these e-mails per day. I'm already a Creative Cloud customer. Seriously, Adobe, there's got to be an internal mechanism that says, "OK, he's already a subscriber, maybe he doesn't need to get the subscription spam." Seriously?!
Wednesday, July 24, 2013
Adobe updates
I woke up to find a ton of updates waiting for me in my Creative Cloud notifier. Lots of bug fixes, really taxing my 6GB 4G monthly allotment. Unfortunately, I'm one of the 20% of rural Americans without high-speed wired internet. I get my internet through my Verizon handset at 4G. I'm starting to think that maybe Verizon owns Adobe stock, or the other way around. Hmmm
I'm liking the improvements to the CC products enough, I just wish I could download the patches elsewhere, save them to a thumb drive, then install them when I get home. Hmmm.
Tuesday, July 23, 2013
Public service careers and the mess in Detroit
I've been watching the various news reports on the demise of Detroit. It's very depressing.
I've got family in Michigan. I've got family from Michigan. I haven't set foot in the state in over 20 years, but I'm still a die-hard Wings fan. I go to see them play the Kings or the Ducks when they come to town. But, as you can see from the photo, as a civil servant, I can't afford to sit near the ice.
As a civil servant, there are only two things I can say about how much I make per hour, my benefits, and my pension: yes or no. I can take what's offered, or I can quit. In a closed government shop, I have no say. I like what I do, so I stay. Even though I make less than I did 10 years ago, with all the give-backs and the extra contributions toward health and pension, I'm glad that I'm employed, so I count my blessings and pray for those in worse circumstances.
With that in mind, I feel a bit angry at the news reports that say the government workers are to blame for sucking Detroit down a hole. Those folks had nothing to do with that. The job opening was posted, they applied, the government union said what the pay scale was and how much they'd get in pension and health benefits ... then they worked their careers. You can argue over individual slugs and superstars, but the workers don't set pay and benefit rules.
Since I don't live in the city that employs me, I don't get to vote for the people who set my wage. I think it's the same way for a lot of government employees. So, I'm left with two choices: stay or go. If I go, someone else will take the job. Government won't shrink.
So what does this say about me, government workers in general, and Detroit (or your city)? The city where I am employed has such a positive balance in their portfolio (as reflected in their recent CAFR) that they could actually buy Detroit's entire debt burden and have about $3 billion left over. Why is my city doing so well and Detroit doing so poorly?
Something to think about as I change the channel to something more positive, like Extreme Amish Cougars on NatHistGeo HD.
Monday, July 22, 2013
Print Screen
There are occasions when you'll need to do a screen grab of your workstation's screen. You hit the Print Screen key on the keyboard, then paste the contents of the clipboard into Paint, Photoshop, etc. For Amped FIVE users, it's there too.
After pressing the Print Screen key, select Image Paster. Image Paster simply places the clipboard capture into a new chain. You can select the JPEG compression level, or choose another compression method altogether, in the Image Paster control box.
I know that you can do all of this in Paint or in Photoshop, but a reader asked about Image Paster, so ...
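For the curious, the same clipboard-to-file step can also be scripted outside of FIVE. Here's a minimal Python sketch using Pillow - the file name and quality value are just examples, and clipboard grabbing works on Windows and macOS:

    # After pressing Print Screen, the capture sits on the clipboard.
    # Pillow's ImageGrab can pull it down and save it with a chosen
    # JPEG quality - the same general task Image Paster performs in FIVE.
    from PIL import ImageGrab  # pip install Pillow

    img = ImageGrab.grabclipboard()
    if img is None or isinstance(img, list):  # a list means copied files, not a capture
        print("No image on the clipboard - press Print Screen first.")
    else:
        img.save("screen_grab.jpg", "JPEG", quality=90)  # example quality setting
        print("Saved", img.size, "capture to screen_grab.jpg")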
Enjoy.
Friday, July 19, 2013
Ahh ... the good old days
Amazingly enough, PC prices are at historic lows ... but my MacBook Pro cost almost exactly the same as my first Macintosh.
You might recognize the program on the screen. Aldus PageMaker rocked.
Verifeyed still not verified
About a year ago, I tested Verifeyed and found that it wasn't an appropriate solution for authenticating images from social media (see this post).
Since I last tested Verifeyed, they've changed it up a bit ... but not much.
Probably in response to a discussion in a group at LinkedIn, I was offered a chance to try Verifeyed again (though I was told that this was/is a rare event, as they don't have a trial version of their software).
After a frustrating week of finding that not much had changed other than the UI, I sent them some of my test images to authenticate. As before, they were from my Facebook page. Here's the response from their support team: "To summarize, Verifeyed is not an appropriate software to analyze photos of the kind you shared with us. We recommend to use Verifeyed only on photos captured by today’s cameras."
Thus, given Verifeyed's high price, the things it actually does, and the lack of a trial version, I would say that FourMatch is a better choice for trying to authenticate images that come directly from "today's cameras." FourMatch is about half the price of Verifeyed, runs within Photoshop (which you should already own/rent), and they offer a trial.
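The underlying problem is worth spelling out: signature-based tools match a file's metadata and structure against known camera originals, and social media sites recompress uploads and strip that metadata on the way in. You can see this for yourself with a few lines of Python and Pillow - the file names below are placeholders for your own test images:

    # Why social media images fail signature-based authentication:
    # the upload pipeline recompresses the JPEG and strips metadata.
    from PIL import Image

    for path in ["camera_original.jpg", "downloaded_from_facebook.jpg"]:
        img = Image.open(path)
        exif = img.getexif()
        qtables = getattr(img, "quantization", {})  # JPEG quantization tables
        print(path)
        print("  EXIF tags present:", len(exif))       # originals: dozens; re-saves: often 0
        print("  quantization tables:", len(qtables))  # values change on recompression

Run that against a camera original and its Facebook copy and the difference is immediate.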
Thursday, July 18, 2013
Forensic Video Acquisition Field Kit?
So, I get an e-mail from my friends at Ocean Systems announcing their "Forensic Video Field Acquisition Kit." From reading the piece, I'm a bit worried that inexperienced folks and uninformed command staff might be confused as to what this is and what this isn't.
First of all, what is it? It's a nicely put together kit for your extreme last-resort option of grabbing digital evidence out of the analog video port of a DVR. Yes, I had to say it. This is not an option of first choice, but one used when all else fails.
The Best Practices for the Retrieval of Video Evidence from Digital CCTV Systems (DCCTV Guide), also known as the red flip book, notes on page 1 that you should be after the data - "the retrieved video data should be retained as the master evidence." Note the word, data. This kit is designed/sold to grab signal, not data. (I guess you could use it to network to the DVR, but that isn't listed as a feature and you could just as easily buy a cheap laptop and a cross-over cable on Amazon to do that part).
Remember, DCCTV retrieval is the collection of relevant video data and associated metadata from a digital video recording system. In every class that I've ever taught on the subject of the retrieval of DCCTV evidence, I've focused on securing the data and metadata as the master evidence. Again, this kit grabs signal, not data/metadata.
Page 25 of the red flip book notes that not all DVRs have a digital output option. For these systems, the book advises the investigator to consider seizing the DVR as the master evidence. If that's not possible, it then advises an analog signal capture as a last-resort option. I know that the book says it, but why is seizing the box important? If I have a DVR with no digital output options, and I have the DVR in my lab, I can do whatever it is that I do for the case. I can turn the DVR over to opposing counsel's expert, and they can do whatever it is that they do - from the same piece of master evidence. With this kit's output, the opposing expert can only work from my output, not from the original evidence (with the DVR in continued service, it's likely that the original data will be long gone before trial ever starts).
Additionally, the statement "Now first responders and experienced analysts alike can go on scene with confidence that they will walk away with an uncompressed copy of the evidence they need to investigate the case" is a bit misleading. You aren't getting a copy in the classical sense of the word. If you were, you'd get a copy of the data - but you aren't. The evidence is the data and the metadata. The signal is a nice picture of the data, but it's just that, a visual representation of the data. It's like taking a picture of a pistol and trying to do ballistics.
Added to that is the fact that it's a less than real time option. Yes, I said it. It's less than real time. You have to play it back in real time, then wait for Omnivore to save it in its native Omnivore format, then convert it to something usable for your editor. So, it's less than real time - not "instantly previewed, saved or exported to common formats" as the marketing page states. What happens if you have multiple cameras, or multiple hours to capture? Is this something that you want to do in the field, or would you consider seizing the DVR and doing this back in the lab? In cases where I can get the data, that's my preference. If the data/player doesn't support the creation of a file that my attorney needs (.avi, .wmv, .mov) for his/her presentation, I can use my Omnivore to speed that along. But, I still have the original data. In many cases, like with .264 or .re4 files, I can quickly convert the proprietary files with FIVE and thus skip the real time Omnivore option (though I like the Omnivore's frame speed detection better for doing screen captures).
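To put rough numbers on the field-time cost - the camera count and footage length below are hypothetical:

    # Real-time analog capture cost (hypothetical numbers).
    cameras = 16
    hours_of_footage = 2
    capture_hours = cameras * hours_of_footage  # each camera must play back in real time
    print(capture_hours, "hours on scene, before saving and converting")  # 32 hours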
Another worry is that folks will use this without understanding the underlying technology behind what's going on in this digital-to-analog-to-digital conversion. Remember, this is being marketed to First Responders. In the world of Melendez-Diaz, the accused can/will call the first responder to quiz them on their decision matrix, SOPs, choice of tools, familiarity with the tools/tech, etc. Is this something that your command staff wants to have happen? Let's say that the recording was done in the DVR at 720x240 and the frame rate was variable ... but around 7 frames per second. Your Linux-based DVR has only a VGA output and you capture it using this system. How does the DVR put 720x240 into VGA and come out even? Was anything lost? What was lost? Was your process validated and is it reliable? With the many recorded frame sizes out there, are you comfortable with your first responders facing these questions under cross? I know a few "experts" who would have trouble articulating the answers in a way that the Trier of Fact would understand.
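For a back-of-the-envelope look at that question, here's a sketch using the hypothetical numbers above (720x240 recorded, 640x480 VGA out):

    # What happens when a 720x240 recording goes out a 640x480 VGA port?
    # Neither axis scales by a clean integer factor, so pixels get resampled.
    rec_w, rec_h = 720, 240   # hypothetical recorded frame size
    out_w, out_h = 640, 480   # VGA output
    print("horizontal factor:", out_w / rec_w)  # 0.888... -> detail discarded
    print("vertical factor:  ", out_h / rec_h)  # 2.0      -> lines doubled or interpolated
    # The capture is a resampled rendering of the data, not the data itself.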
So, as an Omnivore owner, would I consider this an upgrade? I'm not sure. It seems like a logical update to the StarWitness Field Agent concept. Knowing the folks involved in making it, I'm confident that it will work well for what it actually does. But I just wince at the marketing and the way it's portrayed. That's my unvarnished opinion.
Wednesday, July 17, 2013
How accurate is forensic analysis?
This Washington Post article highlights a review of death penalty convictions "... in which FBI forensic experts may have mistakenly linked defendants to crimes with exaggerated scientific testimony ..."
Within the article, there's an interactive infographic on the reliability of certain "forensic analysis" domains. "Body changes registered by polygraph equipment can be subjective to interpret, caused by anxiety rather than guilt." "A 2003 National Academy of Sciences panel found polygraph testing lacks sufficient scientific validity and accuracy to justify its use in screening federal employees but useful as an investigative tool. Several federal circuit and state courts deem polygraph evidence inadmissible." In spite of this, many LE agencies continue to use the polygraph exam to screen recruits and internal transfers to sensitive assignments. One is left to wonder how many false positives have prevented otherwise fit candidates from gaining employment in LE. I know from my own experiences with the polygraph: the attachments don't fit on my larger-than-average body, and I barely fit in the special chair ... so how can the results be accurate or valid, much less reliable and repeatable?
Monday, July 15, 2013
DAC acquires Salient Stills
While I usually refrain from publishing press releases, this one gets an exception.
BOSTON, July 15, 2013 – Salient Stills, a leader in video forensics and image enhancement, today announced it has been acquired by Digital Audio Corporation (DAC), a leading supplier of forensic audio clarification tools. DAC will sell, support and offer training for Salient Stills’ VideoFOCUS PRO and VFSource systems. Salient Stills CEO and President Laura Teodosio and CTO Jeff Hunter will join DAC, and continue their market and product development.
“We’ve been considering strategic acquisitions for several years, and Salient Stills is the perfect complement to our existing products,” said Donald Tunstall, President and General Manager of DAC. “Together, we can more seamlessly meet the unique video and audio forensics challenges of our shared customer base in the law enforcement, homeland security, and defense sectors.”
“DAC is the ideal home for our technology and customers, and this acquisition ensures that our products will continue to evolve,” said Teodosio, of Salient Stills. “The combination of our forensics technologies also enables DAC to offer the industry’s most advanced and comprehensive toolset available.”
VideoFOCUS Pro captures and exports video from videotape, CCTV systems, digital video cameras, cell phones and proprietary DVR formats, and generates higher resolution stills and videos. DAC’s CARDINAL forensic audio processor seamlessly handles forensic analysis and processing of all analog and digital media. DAC will be exhibiting these products in booth #812 at the NATIA conference this week in Memphis, TN.
Friday, July 12, 2013
Reliability and validity
With all the chatter on the Zimmerman trial and the exclusion of the State's voice "experts," I wanted to take a moment to look at reliability and validity from a science standpoint. To do this, I'll enlist the help of one of my favorite authors on the subject, William M.K. Trochim, Professor in the Department of Policy Analysis and Management at Cornell University.
"Reliability has to do with the quality of measurement. In its everyday sense, reliability is the "consistency" or "repeatability" of your measures. Before we can define reliability precisely we have to lay the groundwork. First, you have to learn about the foundation of reliability, the true score theory of measurement. Along with that, you need to understand the different types of measurement error because errors in measures play a key role in degrading reliability. With this foundation, you can consider the basic theory of reliability, including a precise definition of reliability. There you will find out that we cannot calculate reliability -- we can only estimate it. Because of this, there a variety of different types of reliability that each have multiple ways to estimate reliability for that type. In the end, it's important to integrate the idea of reliability with the other major criteria for the quality of measurement -- validity -- and develop an understanding of the relationships between reliability and validity in measurement.
We often think of reliability and validity as separate ideas but, in fact, they're related to each other. Here, I want to show you two ways you can think about their relationship.
One of my favorite metaphors for the relationship between reliability and validity is that of the target. Think of the center of the target as the concept that you are trying to measure. Imagine that for each person you are measuring, you are taking a shot at the target. If you measure the concept perfectly for a person, you are hitting the center of the target. If you don't, you are missing the center. The more you are off for that person, the further you are from the center.
The figure above shows four possible situations. In the first one, you are hitting the target consistently, but you are missing the center of the target. That is, you are consistently and systematically measuring the wrong value for all respondents. This measure is reliable, but not valid (that is, it's consistent but wrong). The second shows hits that are randomly spread across the target. You seldom hit the center of the target but, on average, you are getting the right answer for the group (but not very well for individuals). In this case, you get a valid group estimate, but you are inconsistent. Here, you can clearly see that reliability is directly related to the variability of your measure. The third scenario shows a case where your hits are spread across the target and you are consistently missing the center. Your measure in this case is neither reliable nor valid. Finally, we see the "Robin Hood" scenario -- you consistently hit the center of the target. Your measure is both reliable and valid (I bet you never thought of Robin Hood in those terms before)."
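Trochim's target metaphor is easy to simulate: bias plays the role of (in)validity, and spread plays the role of (un)reliability. A quick Python sketch - the bias and spread values are made up for illustration:

    # The four target scenarios: bias ~ validity, spread ~ reliability.
    import numpy as np

    rng = np.random.default_rng(0)
    scenarios = {
        "reliable, not valid":          dict(bias=2.0, spread=0.2),
        "valid on average, unreliable": dict(bias=0.0, spread=2.0),
        "neither reliable nor valid":   dict(bias=2.0, spread=2.0),
        "reliable and valid":           dict(bias=0.0, spread=0.2),
    }
    for name, p in scenarios.items():
        shots = rng.normal(loc=p["bias"], scale=p["spread"], size=1000)
        print(f"{name:30s} mean error={shots.mean():+.2f}  spread={shots.std():.2f}")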
So, yes, I can hear you now ... but you're talking about social research, like the stuff that you're doing for your PhD. Ok, sure. But think about it for a second. You call yourself an image analyst or a video analyst. Those fields have specific domains. Those domains involve measurements of one type or another. So, I ask you, are we really talking about two different things? Or, does the scientific method work across a number of academic and professional disciplines? I would argue that it does.
"Reliability has to do with the quality of measurement. In its everyday sense, reliability is the "consistency" or "repeatability" of your measures. Before we can define reliability precisely we have to lay the groundwork. First, you have to learn about the foundation of reliability, the true score theory of measurement. Along with that, you need to understand the different types of measurement error because errors in measures play a key role in degrading reliability. With this foundation, you can consider the basic theory of reliability, including a precise definition of reliability. There you will find out that we cannot calculate reliability -- we can only estimate it. Because of this, there a variety of different types of reliability that each have multiple ways to estimate reliability for that type. In the end, it's important to integrate the idea of reliability with the other major criteria for the quality of measurement -- validity -- and develop an understanding of the relationships between reliability and validity in measurement.
We often think of reliability and validity as separate ideas but, in fact, they're related to each other. Here, I want to show you two ways you can think about their relationship.
One of my favorite metaphors for the relationship between reliability is that of the target. Think of the center of the target as the concept that you are trying to measure. Imagine that for each person you are measuring, you are taking a shot at the target. If you measure the concept perfectly for a person, you are hitting the center of the target. If you don't, you are missing the center. The more you are off for that person, the further you are from the center.
The figure above shows four possible situations. In the first one, you are hitting the target consistently, but you are missing the center of the target. That is, you are consistently and systematically measuring the wrong value for all respondents. This measure is reliable, but no valid (that is, it's consistent but wrong). The second, shows hits that are randomly spread across the target. You seldom hit the center of the target but, on average, you are getting the right answer for the group (but not very well for individuals). In this case, you get a valid group estimate, but you are inconsistent. Here, you can clearly see that reliability is directly related to the variability of your measure. The third scenario shows a case where your hits are spread across the target and you are consistently missing the center. Your measure in this case is neither reliable nor valid. Finally, we see the "Robin Hood" scenario -- you consistently hit the center of the target. Your measure is both reliable and valid (I bet you never thought of Robin Hood in those terms before)."
So, yes, I can hear you now ... but you're talking about social research, like the stuff that you're doing for your PhD. Ok, sure. But think about it for a second. You call yourself an image analyst or a video analyst. Those fields have specific domains. Those domains involve measurements of one type or another. So, I ask you, are we really talking about two different things? Or, does the scientific method work across a number of academic and professional disciplines? I would argue that it does.
Thursday, July 11, 2013
The inverse CSI effect in the age of digital crime
Here's an interesting take on the CSI effect: "The "CSI Effect" has been described as being an increased expectation from jurors that forensic evidence will be presented in court that is instantaneous and unequivocal because that is how it is often presented for dramatic effect in television programs and movies. Of course, in reality forensic science, while exact in some respects is just as susceptible to the vagaries of measurements and analyses as any other part of science. In reality, crime scene investigators often spend seemingly inordinate amounts of time gathering and assessing evidence and then present it as probabilities rather than the kind of definitive result expected of a court room filled with actors rather than real people.
However, while suggesting this CSI Effect is perhaps not quite as widespread as one might imagine among jurors, informatician Richard Overill of King's College London believes it might have a positive effect on reducing the tendency to criminal behaviour among some individuals. He offers details of his analysis of the "Inverse CSI Effect" in a forthcoming issue of the International Journal of Electronic Security and Digital Forensics. This would be manifest, he says, particularly among so-called cyber-criminals, fearing the instantaneous and definitive forensic evidence from the imagined cyber-sleuths.
If this inverse CSI effect exists then one might imagine that a proportion of cyber-criminals would modify their behaviour in one of three ways. They might go straight by withdrawing from their nefarious activities altogether. They might attempt to go "under the radar", restricting their crimes to ones with lower impact and less "profit" that would not necessarily warrant costly police resources for investigation. Alternatively, they might expend large amounts of effort or money to obfuscate their modus operandi with multiple layers of concealment and stealth to make their crimes invisible to even the slyest cyber sleuth ..."
Continue reading by clicking here.
Wednesday, July 10, 2013
Samsung RE4 and VFS4 Files
Spreadys Space goes into incredible depth on the .re4 / .vfs4 issue. Check it out by clicking here.
Tuesday, July 9, 2013
Florida Says "Good-Bye Frye" & "Hello Daubert"
This just in from the BullsEye blog: "Say hello to a whole new expert era in Florida. After nearly a century of utilizing the Frye standard to evaluate the admissibility of expert witness testimony, on July 1 a new law adopting the federal Daubert standard came into effect across Florida state courts.
The July 1 date is a momentous one for those who rely on the use of expert witness testimony in Florida state courts, as it signals a major evidentiary shift related to the use and admissibility of expert testimony in Florida. Signed into law on June 5th by Gov. Rick Scott as HB 7015, the law officially abolishes the long-standing Frye standard in Florida, replacing it with the federal court’s more rigorous Daubert standard.
Florida’s adoption of the federal Daubert standard for admissibility means big expert witness changes for that state are on the horizon. Implementation of the Daubert standard will impose greater hurdles for the admission of expert testimony in Florida state courts, affecting attorneys, experts, parties, and even judges ..."
Monday, July 8, 2013
re4 file problems solved
So it's back to work after an extended holiday. First off, a disc with an .re4 file. Thanks to Larry, I can get the link to download a player ... but the player offers no export other than still images. My options? Omnivore, which is a slower than real time solution for this particular file's 16 cameras. Or, Amped FIVE.
FIVE opened the file using the DVR Change Container to AVI function. It's now just a big multiplexed AVI file. Then, a quick demux, select the appropriate camera, trim to just the part of the scene containing the relevant footage, and output to AVI. Once the footage has been demuxed, it's as easy as selecting the next relevant camera view, trimming, and outputting. All in all, a very fast option indeed.
So, for those files that use "normal" video codecs wrapped in a proprietary container, FIVE is the fast option for getting a usable clip.
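FIVE's Change Container function is proprietary, but conceptually this is a remux: the compressed stream is copied into a new wrapper without re-encoding. For proprietary containers that FFmpeg happens to parse (not guaranteed for any given .re4 variant), the equivalent stream copy looks something like this sketch:

    # A container change is a remux: copy the compressed bitstream into a
    # new wrapper without re-encoding. Whether FFmpeg can parse a given
    # proprietary .re4 file is NOT guaranteed - this is a conceptual sketch.
    import subprocess

    subprocess.run([
        "ffmpeg",
        "-i", "evidence.re4",  # placeholder input
        "-map", "0:v:0",       # works if each camera lands in its own stream
        "-c", "copy",          # no re-encode: the video data is untouched
        "camera1.avi",
    ], check=True)

Note that many DVR muxes interleave all cameras in a single stream, which is exactly why a frame-accurate demux step like FIVE's is needed.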
Enjoy.