As analysts, we rely upon research to form the basis of our work. If we're conducting a 3D measurement exercise utilizing Single View Metrology, as operationalized by the programmers at Amped SRL, we're relying not only upon the research of Antonio Criminisi and his team, but also upon the programmers who put his team's work into their tool. We trust that all those involved in the process are acting in good faith. More often than not, analysts don't dig around to check the research that forms the foundation of their work.
In my academic life, I've conducted original research, supervised the research of others, taught research methods, acted as an anonymous peer reviewer, and participated in and supervised an Institutional Review Board (IRB). In both my academic life and my professional life as an analyst, I use the model shown above to guide my research work.
For those who are members of the IAI and receive the Journal of Forensic Identification, you may have noticed that the latest edition features two letters to the editor that I submitted last summer. For those who don't receive the JFI, you can follow along here, but there's no link I can provide to the letters, as the JFI does not allow the general public to view its publications. Thus, I'm sharing a bit about my thought process in evaluating the particular article that prompted my letters here on the blog.
The article that prompted my letters dealt with measuring 3D subjects depicted in a 2D medium, otherwise known as photogrammetry (Meline, K. A., & Bruehs, W. E. (2018). A comparison of reverse projection and laser scanning photogrammetry. Journal of Forensic Identification, 68(2), 281-292).
When evaluating research, using the model shown above, one wants to think critically. As a professional, I have to acknowledge that I know both of the authors and have for some time. But I have to set that aside, mitigating any "team player bias," and evaluate their work on its own merits. Answers to questions about validity and value should not be influenced by my relationships but by the quality of the research alone.
The first stop on my review is a foundational question: is the work experimental or non-experimental? There is a huge difference between the two. In experimental research, an independent variable is manipulated in some way to find out how it will affect the dependent variable. For example, what happens to recorded frame rate in a particular DVR when all camera inputs are under motion or alarm mode? "Let's put them all under load and see" tests the independent variable's (the load on the camera inputs) potential influence on the dependent variable (the recorded frame rate) to find out if / how one affects the other. In non-experimental research there is no such manipulation; it's largely observational. Experimental research can help to determine causes, as one is controlling the many factors involved. In general, non-experimental research cannot help to determine causes. Experimental research is more time consuming to conduct, and thus more costly.
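To make the DVR example concrete, here is a minimal sketch of how one might quantify the dependent variable in such a test. It assumes you can extract per-frame timestamps from the exported footage; the function name and workflow are mine, not any particular DVR vendor's tooling.

```python
import numpy as np

def recorded_frame_rate(timestamps_s):
    """Estimate recorded frame rate (frames per second) from a list of
    frame timestamps, in seconds, extracted from an exported clip."""
    ts = np.sort(np.asarray(timestamps_s, dtype=float))
    deltas = np.diff(ts)
    if deltas.size == 0 or deltas.mean() <= 0:
        return float("nan")
    return 1.0 / deltas.mean()

# Hypothetical comparison of the two conditions:
# baseline_fps = recorded_frame_rate(baseline_clip_timestamps)
# loaded_fps   = recorded_frame_rate(all_inputs_under_load_timestamps)
```

Comparing the estimate from a baseline export against one captured with every input under load is what lets the experimenter say whether the manipulation affected the recording.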
With this in mind, I read the paper looking for a description of the variables involved in the study - what was being studied and how potential influencing factors were being controlled. Finding none, I determined that the study was non-experimental - the researchers simply observed and reported.
The case study featured a single set of equipment and participants. The study did not examine the outputs from a range of randomly chosen DVRs paired with randomly chosen cameras. Nor did the study include a control group of participants. For the comparison of the methods studied, the study did not feature a range of laser scanners or a range of tools with which to create the reverse projection demonstratives. No rationale for the specific tool choices was given.
For the participants, those performing the measurement exam, no random assignment was used. Particularly troubling, the participants used in the study were co-workers of the researchers. Employees represent a vulnerable study population, and problems can arise when these human subjects are not able to freely consent to participating in the research. To me, as an IRB supervisor, this situation raises the issue of potential bias. I would expect to see a statement about how that bias would be mitigated and confirmation that the researchers' IRB had acknowledged and approved the research design. Unfortunately, no such statement exists in the study. Given that the study was essentially a non-experimental test of human subjects, and not an experimental test of a particular technology or technique, an IRB's approval is a must. One of the two letters that I submitted upbraided the editorial staff of the JFI for not enforcing its own rules regarding the requirement for an IRB approval statement in studies involving human subjects.
Given the lack of control for bias and extraneous variables, the lack of random participant selection, and the non-experimental approach of the work, I decided that this paper could not inform my own work or my choice of a specific tool or technique.
Digging a bit deeper, I looked at the authors' support for the statements made - their references. I noticed that they chose to utilize some relatively obscure or hard-to-obtain sources; the average analyst could not check even one of the cited references without paying hundreds of dollars for access. In my position, however, I have free access to a world of research through my affiliations with various universities. So, I checked their references.
What I found, and thus reported to the Editor, was that in many cases the cited materials did not support the statements made in the research. In a few instances, the cited material actually refuted the authors' assertions.
In the majority of the cited materials, the original authors cautioned that their work couldn't be used to inform a wider range of research, that case-specific validity studies should be conducted by those employing the described technique, or that they were simply offering a "proof of concept" to the reader.
In evaluating this particular piece of research, I'm not looking to say - "don't do this technique" or "this technique can't be valid." I want to know if I can use the study to inform my own work in performing photogrammetry. Unfortunately, due to the problems noted in my letters, I can't.
If you're interested in creating mixed-methods demonstratives for display in court, Occam's Input-ACE has an overlay feature that allows one to mix the output from a laser scan into a project that contains CCTV footage. The overlay feature is comparable to a "reverse projection" - it's a demonstrative used to illustrate and reinforce testimony. A "reverse projection" demonstrative is not, in and of itself, a measurement exercise / photogrammetry. It is possible, though, to set up the demonstrative and then use SVM to measure within the space. If one wants to measure within the space in such a way as a general rule (not case specific), proper validity studies need to be conducted. At the time of writing this post, no such validity studies exist for the calculation of measurements with such a mixed-methods approach. If one is so inclined, the model above can be used to both design and evaluate such a study. Until then, I'll continue on with Single View Metrology as the only properly validated 3D measurement technique for 3D subjects / objects depicted within a 2D medium.
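For readers curious about what Single View Metrology actually computes, here is a minimal sketch of the height relation described in Criminisi, Reid, and Zisserman's work: the height of a vertical segment follows from its imaged base and top points, the vanishing line of the ground plane, the vertical vanishing point, and a scale factor solved from one reference object of known height. This is my own illustration of that relation, not Amped's implementation, and the function names and conventions are mine.

```python
import numpy as np

def svm_scale(b_ref, t_ref, v, l, known_height):
    """Solve the metric scale factor alpha from a reference object of known height.
    b_ref, t_ref: homogeneous image points (3-vectors) of the reference base and top.
    v: vertical vanishing point (3-vector); l: vanishing line of the ground plane (3-vector)."""
    b_ref, t_ref, v, l = (np.asarray(x, dtype=float) for x in (b_ref, t_ref, v, l))
    num = np.linalg.norm(np.cross(b_ref, t_ref))
    den = known_height * np.dot(l, b_ref) * np.linalg.norm(np.cross(v, t_ref))
    return -num / den

def svm_height(b, t, v, l, alpha):
    """Height of a vertical segment from its imaged base (b) and top (t),
    using the cross-ratio relation: alpha * Z = -||b x t|| / ((l . b) * ||v x t||)."""
    b, t, v, l = (np.asarray(x, dtype=float) for x in (b, t, v, l))
    num = np.linalg.norm(np.cross(b, t))
    den = alpha * np.dot(l, b) * np.linalg.norm(np.cross(v, t))
    return -num / den
```

The point of showing this is not to replace the tool, but to underline that the measurement rests on published, peer-reviewed geometry - exactly the kind of foundation I was looking for, and not finding, in the article discussed above.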