Friday, September 28, 2012
This came in via the Yahoo forensic audio group (Thanks to Douglas Lacey): "URI DFC's Video Previewer is a free application that quickly processes a video and shows its key frames in a PDF file. It is particularly useful in investigations where watching a video is time consuming. It allows specification to select frames at equally spaced intervals, or to perform intelligent selection of frames based on scene changes. The Video Previewer is a free, unsupported, tool."
Click here to find out more and download the free application.
Enjoy.
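For readers who want to experiment with the interval-based idea rather than the tool itself, here's a minimal sketch using OpenCV. The filename, interval, and output naming are illustrative assumptions - this is not URI DFC's code.

```python
# Sketch: pull one frame every N seconds from a video for quick review.
# "evidence.avi" and the 5-second interval are placeholders.
import cv2

def extract_interval_frames(video_path, every_seconds=5, out_prefix="preview"):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS is unreported
    step = int(round(fps * every_seconds))
    index, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(f"{out_prefix}_{saved:04d}.png", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    print(extract_interval_frames("evidence.avi"))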
Thursday, September 27, 2012
FourMatch - the marketing review
I realize that I'll probably upset some people with the comments that I'm about to make. But I saw the marketing graphics for the new FourMatch Photoshop plug-in from Four and Six ... and I was a little shocked. Read the graphic: "Assess reliability of social media images." In the same picture: "Authenticate images instantly."
Now, what does that mean to you? The picture of the person with the mobile phone, the text ... with FourMatch, I should be able to authenticate pictures that I find on social media ... right?
WRONG.
From their own blog, "It currently can only analyze images from digital cameras, mobile devices, and tablets ..."
Remember Beckley? That case involved the police's use of images from Facebook. The police downloaded pictures from the suspect's Facebook account to use during sentencing - showing gang affiliation - in support of a sentence enhancement. Big problem. The defendant claimed that the image had been manipulated.
In California, authentication requires one of three things: the person in the scene says, yes, that's me and that's the scene as I remember it; the photographer says, yes, that's my picture and it shows the scene accurately; or some independent person - like you and me - applies a scientific method to authenticate the image.
Authenticate:
1. to establish as genuine.
2. to establish the authorship or origin of conclusively or unquestionably, chiefly by the techniques of scholarship: to authenticate a painting.
3. to make authoritative or valid.
With that in mind, I became interested in the FourMatch project/product. I need something to authenticate images from social media sites. Something that the court will recognize as accurate - and something that will work within the scientific method. Does FourMatch fit the bill for me? Unfortunately, no.
Of the five factors used to evaluate scientific methodology (testability, peer review, error rate, standards, and general acceptance), FourMatch fails two rather significant ones.
First, I can refute the results presented in the window - in a manner of speaking. Here's my test. Let's say that I take a picture with my own phone. I upload the picture to my Facebook account. I download the picture to my computer. I load the downloaded image into Photoshop and use FourMatch - it flags it with the yellow ... probably not authentic, been processed ... flag. What?! It's my photo.
I'm the photographer, it's my phone, and it's my Facebook account. Who is right? Me, or the FourMatch determination that it's been manipulated and might not be authentic? I worry that, given the people involved in the project, more weight would be given to the results of the plug-in's tests than to my own testimony that it's my picture. Nevertheless, the content and context weren't changed in any way. The only thing that happened was the upload - where Facebook recompresses the image. Facebook does that to every image. With this in mind, how can the marketing statement in the graphic (above, from their web site) be true?
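As an aside, you don't need a commercial plug-in to demonstrate that the upload/download round trip alters the file itself. Here's a minimal, hedged sketch of the kind of check I mean - the filenames are placeholders, and this is obviously not FourMatch's method - comparing the camera original against the copy pulled back down from Facebook:

```python
# Sketch: compare the camera-original file with the copy downloaded from
# a social media site. Matching hashes = byte-identical; differing hashes
# simply mean the file was re-encoded, not that the content was faked.
# "original.jpg" and "facebook_copy.jpg" are placeholder names.
import hashlib

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

orig = sha256_of("original.jpg")
fb = sha256_of("facebook_copy.jpg")
print("byte-identical" if orig == fb else "recompressed or otherwise altered")
```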
In a blog post, the creators say, "It provides objective evidence that a file was not touched by any software application since the time it was first captured." If this is the case, that the plug-in works only on images that are direct from the camera and untouched by any application (as noted above, from their blog) - then you can't use the plug-in to authenticate or even assess images from social media. Social media sites recompress images on upload.
Then, there's the error rate issue. What's the incidence of false negatives or false positives? FourMatch is a database-driven product. It compares the image against a database of camera information. For my assessment, using a popular US-market mobile phone to take an image, the signature from the phone's imager will likely be found in the FourMatch database and it should give the image the green light. But what about images from more obscure phones, not contained within the database, that get flagged yellow for further review? This is the same problem that I had with JPEG Snoop. Does the yellow, needs-additional-inquiry flag qualify as a false negative? Sure, for the Facebook tests FourMatch is right - the file has been touched. But it's possible to authenticate the image by other means. Again, does this qualify as a false negative? Would you be comfortable arguing this point in court?
If you hadn't read this, and you owned FourMatch, and it yellow-flagged your image, would you use the image in your case? Would you confidently proceed? Or would you move on? In triage, it says the image has been processed - touched by software. What would you do?
So, does FourMatch provide objective evidence that a file wasn't touched by any software since capture? For the most part, yes - as long as the capture device's information is in the FourMatch database. In my tests, the database's problems were centered around mobile phones - precisely the types of photos that we're concerned with. More people are ditching their point-and-shoot cameras in favour of the camera in their mobile. After all, who wants to carry around two pieces of gear when cell phone pics can be uploaded directly to social media? People tend to flow to the easiest option. Remember, social media images largely come from we the people, not from photojournalists with professional cameras.
Does FourMatch live up to its own marketing slogan - assess the reliability of social media images? No, it can't. Social media images are recompressed - touched by software.
If FourMatch really did work to authenticate social media images, it would be worth the price. I'd even pay a little more. But most of my authentication requests come from folks who do not have the camera - or the allegations involve content/context issues (the other two of the three F's). As such, I'd have trouble justifying the expense for a database-oriented triage tool.
For media outlets looking to verify the integrity of photos received from their field photographers and other sources ... I'm sure that this is a great tool and the editors will appreciate its ease of use. Generally, the cameras used by photojournalists will be present in the FourMatch database. But for law enforcement and criminal justice employees looking to authenticate social media images ... sorry. Wrong product.
Wednesday, September 26, 2012
Kinesense Player Manager special offer
Last month, I reviewed the Kinesense Player Manager. Well, this month, the company is sweetening the deal ... offering a discount for folks who download and try the product.
Introductory offer: Download a Free 30 day Trial and get a 50% discount on your first order! The Player Manager is available as a single CPU licence (Normal price $500/ €389/ £298). Enterprise wide licences are available (discount pricing applies).
How cool is that?!
Tuesday, September 25, 2012
Authentication of images
Four and Six announced the release of their product recently. According to the company, "... Last month, I wrote a blog post about using the “3 F’s” to determine the authenticity of a photo. FourMatch is a tool for examining the first of these F’s, the File. Unlike many of the techniques we’ve detailed in this blog over the past year, FourMatch does not examine the photo itself, so it doesn’t look for inconsistencies in the image. Tools to handle these other F’s—Footprints and Flaws—will come later."
Ok. I'm not ready to publish my opinion on the product yet. However, I'll invite you to poke around their web site and ask these simple questions: given tight budgets and cutbacks, is FourMatch worth the $890 price tag? Does it add enough value to your workflow to justify the initial purchase price, plus the annual maintenance fees? Again, I'm not even hinting at my own opinion in this matter - just asking you to formulate yours.
Enjoy.
Monday, September 24, 2012
Fixing Field Shift
Oftentimes, compression schemes conspire to ruin your ability to discern a license plate number or other important details. Sometimes, tearing - or field shift - can get in the way of producing a clear image for court or for a BOLO flyer. Thankfully, there's an easy fix.
Here we have a typical traffic camera view. The camera is positioned so that license/number plate details can be captured. But the speed of the car, combined with the recording technology, means that quickly viewing and recognizing the plate info is not possible (or accurate).
In AmpedFIVE, the fix is easy.
From the Filter Panel, select Field Shift.
The Filter Settings allow you to nudge the fields (both Upper and Lower) both Horizontally and Vertically. The results are updated in real time. In this case, the solution is just a few clicks away.
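If you're curious about what a field shift correction is doing conceptually, here's a rough numpy sketch of the idea - splitting an interlaced frame into its two fields and nudging one of them horizontally. The shift values and the even/odd field convention are illustrative assumptions, not Amped's implementation:

```python
# Sketch: split an interlaced frame into upper/lower fields and shift one
# field horizontally to repair tearing. Shift amounts are illustrative.
import numpy as np

def shift_field(frame, lower_dx=0, upper_dx=0):
    fixed = frame.copy()
    # Even rows = upper field, odd rows = lower field (a common convention).
    fixed[0::2] = np.roll(frame[0::2], upper_dx, axis=1)
    fixed[1::2] = np.roll(frame[1::2], lower_dx, axis=1)
    return fixed

# Example: nudge the lower field 3 pixels to the right.
frame = np.random.randint(0, 256, (480, 704), dtype=np.uint8)  # stand-in image
repaired = shift_field(frame, lower_dx=3)
```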
For presentation purposes, I used the Compare Original filter, then wrote out the file to post here. I think that you'll agree, fixing field shift problems in less than a minute - in a reliable and repeatable way, one that's supported by science and academic references - is a really cool thing given our growing workloads.
Enjoy.
Friday, September 21, 2012
Image processing is a subclass of signal processing
"... we may acquire a natural image, process it to enhance the picture, compress it for transmission, and then encode and transmit it in some fashion over a digital network. On the other end, the image is decoded, decompressed, and displayed to create another signal (the visible light of the display). But that isn’t the end of the story: the signal is then received by the eyes, processed further, and interpreted in some fashion by our brain. From acquisition to interpretation, the initial signal may be transformed, modified, and retransmitted numerous times. In this example, the signal underwent more than 10 transformations ..."
When a signal has continuous domain and range, we usually call it analog. When a signal has discrete domain and range, we call it digital.
"... The spacing of discrete values in the domain of a signal is called the sampling of that signal. This is usually described in terms of some sampling rate–how many samples are taken per unit of each dimension. Examples include “samples per second”, “frames per second”, etc.
The spacing of discrete values in the range of a signal is called the quantization of that signal. Quantization is usually thought of as the number of bits per sample of the signal. Examples include “black and white images” (1 bit per pixel), “16-bit audio”, “24-bit color images”, etc ..." - B. Morse, BYU, On Signals and Images
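To make the two terms concrete, here's a small numpy sketch - a 1 kHz tone sampled at 8000 samples per second and then quantized to 8 and 16 bits per sample. The specific numbers are just examples:

```python
# Sketch: sampling (samples per second) vs. quantization (bits per sample).
# A 1 kHz sine is sampled at 8 kHz and quantized to 8 and 16 bits.
import numpy as np

fs = 8000                                    # sampling rate: 8000 samples/second
t = np.arange(0, 0.01, 1.0 / fs)             # 10 ms of signal
signal = np.sin(2 * np.pi * 1000 * t)        # 1 kHz tone, range [-1, 1]

def quantize(x, bits):
    levels = 2 ** bits                       # e.g. 256 levels for 8-bit audio
    return np.round((x + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1

q8 = quantize(signal, 8)                     # "8-bit audio"
q16 = quantize(signal, 16)                   # "16-bit audio"
print(np.max(np.abs(signal - q8)), np.max(np.abs(signal - q16)))
```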
Thursday, September 20, 2012
Update to AmpedFIVE
There's a new update to AmpedFIVE that addresses the potential addition of colour information when performing unsharp mask or Laplacian sharpening. Build 4156 adds a choice between Intensity and Color mode in Unsharp Masking and Laplacian Sharpening.
We've done this exercise in my Photoshop classes. How do you perform USM without adding colour to your image? How do you answer the question, "Did you add anything to this image?" if you use USM in your workflow and don't control the process in some way so as to limit the adjustments to only the lightness/intensity information?
There are a number of ways to accomplish this task in Photoshop, as long as you know what's happening. You can work in LAB and sharpen only the L channel. You can work on a separate layer and control the effect with the appropriate blending mode. In AmpedFIVE, it's now much easier - just select Intensity when performing your sharpening. How cool is that?!
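For readers who want to see the principle outside of either tool, here's a hedged sketch in Python: an unsharp mask applied only to the luminance (Y) plane of a YCbCr version of the image, leaving the chroma channels - and therefore the colours - untouched. The filename, radius, and amount are assumptions, and this is neither Adobe's nor Amped's code:

```python
# Sketch: unsharp mask on the luminance (Y) channel only, leaving colour alone.
# Radius/amount values are illustrative. Requires Pillow and numpy.
import numpy as np
from PIL import Image, ImageFilter

img = Image.open("evidence.png").convert("YCbCr")
y, cb, cr = img.split()

blurred = y.filter(ImageFilter.GaussianBlur(radius=2))
y_arr = np.asarray(y, dtype=np.float32)
b_arr = np.asarray(blurred, dtype=np.float32)
amount = 1.0
sharpened = np.clip(y_arr + amount * (y_arr - b_arr), 0, 255).astype(np.uint8)

out = Image.merge("YCbCr", (Image.fromarray(sharpened), cb, cr)).convert("RGB")
out.save("evidence_usm_intensity_only.png")
```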
Enjoy.
Wednesday, September 19, 2012
The nature of perception
From David Kelley's classic, The Evidence of the Senses: " ... The normal adult human is not a tabula rasa. We bring to perception an enormous fund of background knowledge, and the act of perceiving is usually guided by the conscious purposes of expanding or applying that knowledge. Consider, for example, the role of attention. At any moment, the way in which one directs his attention affects what is perceived from among all the things that could be perceived, given present stimulation. It is doubtful that an act of attention is necessary for perceiving every object. Stimuli such as sudden chills, loud explosions, or noxious odors intrude themselves upon us quite apart from any conscious choice. But attention certainly can make the difference between perceiving or not perceiving an object - the classic examples being one's awareness of his clothes or of a low hum in the background. And the degree of attention to an object has a marked effect on the character of one's perceptual awareness of it. Thus there are major differences in the scope, clarity, and specificity of awareness - that is, in how much of the object is perceived - along a continuum of attention from objects at the edge of the visual field, to pocket change fingered absentmindedly, to the road ahead while one is driving, to a pen one has found at last among the litter on the desk, to the face of another person in conversation, to the riveting sound of a scratching at one's door late at night. ..."
The history of what a person attends to affects what it is possible for him to perceive in a given situation. Attention is the major factor in perceptual learning.
As you can see now, after a few posts on David Kelley's book, the science of perception - and the philosophy of perception - are quite different.
Enjoy.
Tuesday, September 18, 2012
Perceptual Judgments
From David Kelley's classic, The Evidence of the Senses: " ... The perceptual judgment is the conceptual identification of what is perceived. Transforming our perceptual awareness of the world into conceptual form, it gives us a way to retain and communicate what we perceive and to express the evidence of the senses in a way that can bring it to bear on abstract conclusions. My approach to the perceptual judgment is centered on this fact; my goal is to understand the link it creates between the perceptual and the conceptual awareness of objects ..."
A perceptual judgment is a belief about a particular object present to the senses.
He goes on: "... A perceptual judgment is justified by the way an object appears to a perceiver. Because the thing before him looks a certain way, he can identify it as a tree, as green, and so on. The appearance will not justify predicating these attributes of any object at random, however, but only of the particular item that appears, and only if it is picked out as a particular item. Thus I might "see" a camouflaged soldier in the sense that his facing surfaces are parts of my field of view, but I am not in a position to form a judgment about him. Nor would the appearance of the field justify such a judgment, unless I can isolate the soldier as a figure against the ground. If I cannot isolate him, but only the various patches of color that in fact are parts of his clothing, then I can form justified judgments only about those patches. This is one reason why it is important that what we discriminate in vision is the entity itself, not merely its facing surface ..."
This is great stuff. If you haven't read the book, I highly recommend it. Best of all, it's now free.
Enjoy.
Monday, September 17, 2012
The Evidence of the Senses
Wonderful news just hit my inbox. David Kelley's classic, The Evidence of the Senses, is now available on Scribd.com, the world's largest online library.
In this highly original defense of realism, David Kelley argues that perception is the discrimination of objects as entities, that the awareness of these objects is direct, and that perception is a reliable foundation for empirical knowledge. His argument relies on the basic principle of the "primacy of existence," in opposition to Cartesian representationalism and Kantian idealism.
This is a wonderful book for building up your vocabulary on perception as well as understanding the philosophy of perception from multiple points of view - given that we deal with how people perceive events, objects, sounds, etc. in our line of work.
Enjoy.
Friday, September 14, 2012
Open proprietary or corrupted video files
Amped Software just announced two new features contained in their latest update to AmpedFive.
File>DVR Change Container to Avi - " ... we discovered that we are able to change the container of a proprietary video file to avi and without any transcoding (meaning loss of quality) and then able to open it in Five with our standard video engines. It works only with some proprietary formats, I dare say in roughly 20-30% of cases, but in those situations this can really saves a lot of time and headaches ..." I was able to change a native Bosch file using this method. It's a cool addition to the program. In my tests, it was able to correctly process .264 files from Clover, Q-See, and Swann DVRs (really popular out here in SoCal) and work with them in FIVE.
File>DVR Convert to Uncompressed Avi - " ... We’ve worked hard to create a video decoding engine able to precisely seek frame by frame; which can be another problem with proprietary codec files. This can help with the amount of different and often non-standard compliant video formats where the frame rate is unstable. In these type of files, where the codec has bugs or is mildly corrupted (like many el cheapo DVR brands seem to produce) the seek may have major issues or won’t work at all. This tool works in a similar way to the previous one, but converts any supported format to a raw uncompressed avi. This means that the output video file will be decoded very easily by any software with no need for any codec, and the frame-by-frame seek will be always perfect ..."
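FIVE's decoding engine is proprietary, but for readers who want a rough analog of the convert-to-uncompressed-AVI idea, a generic ffmpeg call does something similar for files ffmpeg can already decode (it won't open the truly proprietary containers that FIVE targets). A hedged sketch, with placeholder filenames:

```python
# Sketch: re-wrap a decodable video as uncompressed (rawvideo) AVI so any
# player can seek it frame by frame. Requires ffmpeg on the PATH; the
# input filename is a placeholder. This is NOT Amped's conversion engine.
import subprocess

subprocess.run(
    ["ffmpeg", "-i", "export.264",   # input (must be decodable by ffmpeg)
     "-c:v", "rawvideo",             # uncompressed video stream
     "-pix_fmt", "yuv420p",          # a widely supported pixel format
     "-an",                          # drop audio for simplicity
     "uncompressed.avi"],
    check=True,
)
```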
Can I just say that it's refreshing to see a vendor working on the needs of image and video analysts in real time - not on an 18 month sales cycle. Users submit ideas and issues, like George asking about the Channel Mixer, and Amped responds.
Enjoy.
Point Spread Function
What is point spread function? What does it have to do with image restoration, clarification, or enhancement?
For the purposes of those arriving here from a Bing search looking for spread functions, PSF is not a type of gambling where the point spread is essentially a handicap towards the underdog. Thus the wager becomes "Will the favorite win by more than the point spread?" The point spread can be moved to any level to create an equal number of participants on each side of the wager. PSF is not a way to make money gambling.
Remember that image restoration refers to the removal or minimization of known degradations in an image. This includes deblurring of images degraded by the limitations of the sensor or its environment, noise filtering, and correction of geometric distortions or non-linearities due to sensors. The point spread function describes the imaging system's response to a point input. A point input, represented as a single pixel in the “ideal” image, will be reproduced as something other than a single pixel in the “real” image.
So, in essence, a PSF deals with a point that spreads. A point that spreads is essentially what happens when you think of blur (motion blur, camera shake, etc.). The effect of motion blur transforms points into lines. Therefore, we have to find in the image a line that originally should have been a point but came out in the photo as a line.
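To make the idea concrete, here's a small numpy/scipy sketch that builds a horizontal motion-blur PSF and applies it to a single bright point, turning it into a line. The 9-pixel blur length is an arbitrary example:

```python
# Sketch: a horizontal motion-blur PSF turns a single bright point into a line.
import numpy as np
from scipy.signal import convolve2d

ideal = np.zeros((21, 21))
ideal[10, 10] = 1.0                      # the "point" in the ideal image

length = 9
psf = np.zeros((1, length))
psf[0, :] = 1.0 / length                 # uniform horizontal motion blur

observed = convolve2d(ideal, psf, mode="same")
print(np.count_nonzero(observed[10]))    # the point has spread into ~9 pixels
```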
I'll illustrate how to fix the problem of motion blur soon. But for now, I just wanted to introduce the idea of PSF.
Enjoy.
Thursday, September 13, 2012
Careful inspection of DME output
Here's the scenario:
- I navigate through the DVR's menu and select two camera views for export, date, time, etc.
- I export the selected views to my USB stick.
- I open the USB stick on my laptop to check the contents. I've got 3 files, a player and two files of video (I hope).
- I launch the player and verify that I've got the requested video. Date/time/view checks out - I've got what I want.
Back in the lab ...
- I want to generate still images. The player has no image save functionality. It does, however, have an AVI save option.
- I export the file to AVI. No options are available (codec, uncompressed, etc.)
- I launch the resulting file in GOM.
- I notice something rather interesting - in the DVR player software, the requested/saved view is displayed. There is no option to go to multi-cam view or to view other camera feeds. It simply shows the requested exported file. But ... the AVI has all the camera views (16 in this case) for the requested time/date. Hmm...
So, I opened the file in AmpedFIVE. Sure enough, I can demux all the views, see each camera, and work with the file to generate still images.
I've tried to look into the DVR manufacturer, but this is all I have to go on. I'm sure that you've seen similarly limited About screens.
So the lesson here is that there might be more to your DME than meets the eye.
Wednesday, September 12, 2012
Is the scientific method necessary in legal work?
Think you needn't be concerned with science and the scientific method if you're engaged in FVA, or work with DME or another electronic forensic science discipline? Think again.
Here's a section from an explanation of the Daubert decision:
Scientific knowledge = scientific method/methodology: A conclusion will qualify as scientific knowledge if the proponent can demonstrate that it is the product of sound "scientific methodology" derived from the scientific method.
Factors relevant: The Court defined "scientific methodology" as the process of formulating hypotheses and then conducting experiments to prove or falsify the hypothesis, and provided a nondispositive, nonexclusive, "flexible" set of "general observations" (i.e. not a "test") that it considered relevant for establishing the "validity" of scientific testimony:
- Empirical testing: whether the theory or technique is falsifiable, refutable, and/or testable.
- Whether it has been subjected to peer review and publication.
- The known or potential error rate.
- The existence and maintenance of standards and controls concerning its operation.
- The degree to which the theory and technique is generally accepted by a relevant scientific community.
So here's a philosophical question: does the fact that you don't work for the US federal government, or that you don't work in a state that has adopted the Daubert standard, mean that you shouldn't be bound by sound scientific methodology in your forensic science work?
Tuesday, September 11, 2012
Why Are Theories Important?
Not surprisingly, then, scientific research is based on theory -- all scientific research. Put another way, you can conduct research that is not theory-based -- but it is not science. Even the most rigorous, detailed research that fails to build on or contribute to a theoretical framework does not meet the true test of scientific discovery. It creates knowledge -- but not science. I personally take a very firm position on this. There's lots of research, but a lot of it is not science.
So, why are theories important?
- They explain the relationships between two or more different phenomena.
- They unify observable phenomena.
- They permit us to formulate hypotheses or propositions.
- They raise research from the descriptive to the explanatory.
If you think that you are engaged in forensic science, check your assumptions, your hypothesis, and your theoretical framework.
Monday, September 10, 2012
What's a theory?
Theory: The Layman’s Usage
The popular definition of a theory is that it is a guess, a speculation, or a suggestion. "My theory is that he is German, but he could be Swiss." This is not at all what is meant by a scientific theory.
Theory: Scientific Usage
A scientific theory is a unifying and self-consistent explanation of fundamental processes or phenomena that is totally constructed of corroborated hypotheses. It is as far as you can get from a guess. It is built on reliable knowledge acquired through rigorous research.
The Centrality of Theory
Theory is central to science because theory unifies knowledge. Scientific theories explain by unifying many once-unrelated facts or corroborated hypotheses. Theories are the strongest and most powerful explanations of how the universe works. Theories can only be disproved. By definition, if we still consider it a theory, it hasn’t been disproved yet.
Source: M.E. Swisher, University of Florida IFAS
Friday, September 7, 2012
Training schedule
I'm hitting the road this fall. When the City's broke and can't pay overtime, they give time on the books. What better way to spend the time off than going out and training folks?
First on the list is HTCIA. I'll be there helping FinalData spread the good news about their superior analysis tools (I was at HTCIA last year talking about using FinalMobile to get everything off Samsung handsets). Living and working in a largely CDMA area, FinalMobile is my tool of choice for assuring that I get a proper analysis - and get everything - off a mobile device. Yes, I know, I'm the video guy. But video and images come from all sorts of places - including mobile devices like phones, iPads, iPods, and such. You'll want the best toolset to get the evidence, which is why FinalMobile is part of mine. (Yes, I know, Cellebrite UFED is easier to use ... but it doesn't always get everything off the device. FinalMobile will even work with a Cellebrite physical dump - cool.)
Next, it's off to Vegas (baby) to conduct a week-long session for Amped on AmpedFive. Although they're a bit quiet about it, you can bundle training into your software purchase in two-day (Basic), three-day (Basic through Intermediate), or five-day (Basic through Advanced) sessions. They're really flexible about how they handle their training: either at their North American HQ in Las Vegas, in my training centre in Pasadena, CA, or at your location. All it takes is a call or e-mail to set it up.
Then, after a brief rest, it's off to the LEVA conference in San Diego to present a session on Image Processing Fundamentals. Like I'm doing here on the blog, at the conference I'll be discussing the science behind the tools that we use every day and how it can be incorporated into your FVA testimony.
Finally, I'll be at the NaTIA Pacific Chapter conference talking about image authentication and some new tools that are coming on the market. The date/location is still TBA.
It's been a busy year, and this fall is no different. Regardless of who I'm working with, or the topic that I'm presenting, I'll always bring the viewpoint of helping us help the trier of fact correctly present, view, and interpret digital multimedia evidence. As such, I won't let blind loyalties to a single vendor's product get in the way of informing you as to what's out there and how to use it appropriately.
Enjoy.
Thursday, September 6, 2012
PixInsight first look
Wow. If you want total control over every function involved in image clarification, PixInsight will blow your mind.
First of all - every clarification function that you want to do to an image is in PixInsight. You'll need to know about image processing fundamentals to know what it's called within the software, but it's in there.
So, I loaded an image - the famous SUV example - and popped open the CurvesTransformation process.
Very simply, this one panel allows me to control everything. Each of the RGB channels, the RGB/K, the LAB mode channels, hue, and saturation - without converting modes or switching to a new panel. All of that control in a single panel. Cool! In less than 30 seconds, I corrected the image.
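Under the hood, a curves adjustment amounts to a lookup table applied per channel. Here's a hedged numpy sketch of that idea - a simple gamma curve applied to one channel - which is obviously a far cry from everything PixInsight's CurvesTransformation panel offers:

```python
# Sketch: a "curve" is a 256-entry lookup table applied to a channel.
# Here a gamma curve brightens the midtones of the red channel only.
import numpy as np

def apply_curve(channel, lut):
    return lut[channel]                      # index the LUT with pixel values

gamma = 0.6
lut = (np.clip((np.arange(256) / 255.0) ** gamma, 0, 1) * 255).astype(np.uint8)

image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in RGB
image[..., 0] = apply_curve(image[..., 0], lut)                    # red channel
```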
So far ... I'm liking what I'm seeing. All this and FFT too ... !
Enjoy.
Wednesday, September 5, 2012
PixInsight vs. Photoshop
A reader sent me a tip about a specialty application called PixInsight, asking what I thought about it for image analysis. So, I sent them a request to review their software. Never heard of them? Here's where they place themselves on the image processing spectrum:
"PixInsight and Photoshop are two very different applications. They are in fact so different in their goals, in the way they have been conceived, and in how they are being developed, that we actually think that PixInsight and Photoshop represent, in many aspects, two opposite ways of understanding image processing. So if you are using Photoshop, or a similar application, then PixInsight cannot be a replacement: it can only be a change.
PixInsight pursues a scientific, highly technical approach to image processing. Most of our tools have been designed to solve the problems specific to astrophotography and other technical imaging fields through rigorous and flexible implementations, where the user has full control on every relevant parameter of each applied process. While we try to design and implement our tools to facilitate the user's work as much as possible, ease of use is not one of our main goals. In general, we make no concessions to simplification: there are no fast-food solutions in PixInsight. Versatility, efficiency, powerful tools, rigorous implementations and the development of astrophotography through image processing culture are the main elements of our vision and our mission as the developers of PixInsight.
Photoshop has not been conceived or designed to solve the kind of problems that arise in highly technical imaging fields, of which astrophotography is one of the most demanding ones. Photoshop is a general-purpose image edition application. It is excellent for image edition and retouching but it doesn't qualify for astrophotography because it lacks the necessary algorithms and tools; it is simply not based on the correct principles to provide the required solutions. Photoshop pursues a simplified approach to image manipulation, where the user has little or no control over the applied processes. Due to its lack of resources and to the inadequacy of its implementations, Photoshop is being applied to astrophotography through tricky procedures, including arbitrary manual manipulations and retouching practices without documentary and algorithmic basis that we consider unacceptable in astrophotography.
Hand-painted masks, arbitrary manual selections, retouching, unrigorous layering techniques and other 'magical recipes' are just the opposite of what we understand by astrophotography. Contrarily to what it may seem at first sight, these procedures tend to block your creativity: they teach you nothing about your data and don't require you to understand your images and the actual problems you have to face and solve to build them.
PixInsight provides you with a completely different platform where you can develop your astrophotography with solid foundations. With PixInsight we want to grow your image processing knowledge, as the best way to materialize your creativity and your pursuit of excellence.
The PixInsight project originates from the inside of astrophotography. It is a software platform made by astrophotographers for astrophotographers. PixInsight is not a general-purpose product made by a large multinational company with mass-market interests. It is a highly specialized platform in constant evolution, for which astrophotography is its natural ecosystem. PixInsight is available as native 32-bit and 64-bit applications on FreeBSD, Linux, Mac OS X and Windows, and a single commercial license allows you to install and run PixInsight on any machine you own, on all supported platforms, with unlimited free updates and support. The development of PixInsight is open to the community through an integrated scripting environment (PJSR, the PixInsight JavaScript Runtime) and a comprehensive, cross-platform C++ module development framework (PCL, the PixInsight Class Library). PixInsight is our personal project and we back it with hard work every day. We are close to you and easy to reach. We are different."
Here's me smiling - I love it when geeks talk smack.
Enjoy.
"PixInsight and Photoshop are two very different applications. They are in fact so different in their goals, in the way they have been conceived, and in how they are being developed, that we actually think that PixInsight and Photoshop represent, in many aspects, two opposite ways of understanding image processing. So if you are using Photoshop, or a similar application, then PixInsight cannot be a replacement: it can only be a change.
PixInsight pursues a scientific, highly technical approach to image processing. Most of our tools have been designed to solve the problems specific to astrophotography and other technical imaging fields through rigorous and flexible implementations, where the user has full control on every relevant parameter of each applied process. While we try to design and implement our tools to facilitate the user's work as much as possible, ease of use is not one of our main goals. In general, we make no concessions to simplification: there are no fast-food solutions in PixInsight. Versatility, efficiency, powerful tools, rigorous implementations and the development of astrophotography through image processing culture are the main elements of our vision and our mission as the developers of PixInsight.
Photoshop has not been conceived or designed to solve the kind of problems that arise in highly technical imaging fields, of which astrophotography is one of the most demanding ones. Photoshop is a general-purpose image edition application. It is excellent for image edition and retouching but it doesn't qualify for astrophotography because it lacks the necessary algorithms and tools; it is simply not based on the correct principles to provide the required solutions. Photoshop pursues a simplified approach to image manipulation, where the user has little or no control over the applied processes. Due to its lack of resources and to the inadequacy of its implementations, Photoshop is being applied to astrophotography through tricky procedures, including arbitrary manual manipulations and retouching practices without documentary and algorithmic basis that we consider unacceptable in astrophotography.
Hand-painted masks, arbitrary manual selections, retouching, unrigorous layering techniques and other 'magical recipes' are just the opposite of what we understand by astrophotography. Contrarily to what it may seem at first sight, these procedures tend to block your creativity: they teach you nothing about your data and don't require you to understand your images and the actual problems you have to face and solve to build them.
PixInsight provides you with a completely different platform where you can develop your astrophotography with solid foundations. With PixInsight we want to grow your image processing knowledge, as the best way to materialize your creativity and your pursuit of excellence.
The PixInsight project originates from the inside of astrophotography. It is a software platform made by astrophotographers for astrophotographers. PixInsight is not a general-purpose product made by a large multinational company with mass-market interests. It is a highly specialized platform in constant evolution, for which astrophotography is its natural ecosystem. PixInsight is available as native 32-bit and 64-bit applications on FreeBSD, Linux, Mac OS X and Windows, and a single commercial license allows you to install and run PixInsight on any machine you own, on all supported platforms, with unlimited free updates and support. The development of PixInsight is open to the community through an integrated scripting environment (PJSR, the PixInsight JavaScript Runtime) and a comprehensive, cross-platform C++ module development framework (PCL, the PixInsight Class Library). PixInsight is our personal project and we back it with hard work every day. We are close to you and easy to reach. We are different."
Here's me smiling - I love it when geeks talk smack.
Enjoy.
Tuesday, September 4, 2012
Thresholding
Thresholding is the special case of clipping where the output becomes binary (black/white) - a useful process for latent print analysts.
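In code, thresholding is essentially a one-liner. A minimal numpy sketch (the threshold value of 128 is an arbitrary example):

```python
# Sketch: threshold a grayscale image to a binary (black/white) result.
import numpy as np

gray = np.random.randint(0, 256, (480, 640), dtype=np.uint8)   # stand-in image
binary = np.where(gray >= 128, 255, 0).astype(np.uint8)
```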
Monday, September 3, 2012
Clipping transformations
The special case of contrast stretching where the outer slopes α = γ = 0 is called clipping. This is useful for noise reduction when the input signal is known to lie in the range [a, b]. Remember, of course, that excessive noise reduction leads to a loss of detail, and its application is hence subject to a trade-off between the undesirability of the noise itself and that of the artifacts introduced by its reduction.
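Here's a minimal numpy sketch of that transform - a piecewise-linear stretch over [a, b] with the outer slopes set to zero. The values of a and b are arbitrary examples:

```python
# Sketch: contrast stretching with slopes alpha, beta, gamma; clipping is the
# special case alpha = gamma = 0, so everything outside [a, b] maps to the
# ends of the output range.
import numpy as np

def clip_stretch(u, a=60, b=200, out_max=255.0):
    u = u.astype(np.float32)
    beta = out_max / (b - a)                    # slope inside [a, b]
    v = np.zeros_like(u)
    inside = (u >= a) & (u <= b)
    v[inside] = beta * (u[inside] - a)          # stretch the band of interest
    v[u > b] = out_max                          # alpha = gamma = 0: flat outside
    return np.clip(v, 0, out_max).astype(np.uint8)

image = np.random.randint(0, 256, (480, 640), dtype=np.uint8)   # stand-in image
stretched = clip_stretch(image)
```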