
Tuesday, August 13, 2019

Four Kinds of Science

As a non-verbal autistic person, "language" has always been an issue for me. If you've seen me teaching or talking during a class or a workshop, that "version" of me is akin to Jim 4.0. I haven't always been extemporaneously vocal. For me, that skill came in my late 20s and early 30s.

Because of my "issues" with language, I've chosen to participate in standards-setting bodies and assist in the creation of clearly worded documents. Words mean something. Some words mean multiple things - yes, English is a crazy language. I've tried to suggest words with single meanings, eliminating uncertainty and ambiguity in our documents.

To summarize Der Kiureghian and Ditlevsen (2009), although there is no unanimously approved interpretation of the concept of uncertainty, in a computational or a real-world situation, uncertainty can be described as a state of having incomplete, imperfect, and/or inconsistent knowledge about an event, process, or system. The type of uncertainty in our documents that is of particular concern to me is epistemic uncertainty. The word 'epistemic' comes from the Greek 'επιστηµη' (episteme), which means 'knowledge.' Epistemic uncertainty is the uncertainty presumed to be caused by a lack of knowledge or data. Also known as systematic uncertainty, epistemic uncertainty is due to things one could know in principle but doesn't in practice. This may be because a measurement is not accurate, because the model neglects certain effects, or because particular data have been deliberately hidden.

With epistemic uncertainty in mind, I want to revisit the concept of "science" - as in "forensic science." The problem for me, for my autistic brain's processing of people's use of the term "forensic science," is that I believe that in their practice (in their work) they're emphasizing the "forensic" part (as in forum - debate, discussion, rhetoric) and de-emphasizing the "science" part. Indeed, do we even know what the word "science" means in this context?

According to Mayper and Pula, in their workshops on the epistemology of science as a human issue, there are four kinds of science:

  1. Accepted Science - theories that are not yet refuted, after rigorous tests. Counter-examples must be accounted for or shown to be in error. Theories are “tentative for ever,” but not discarded frivolously. Good replacements are not easily come by. A new theory must account not only for the data that the old theory doesn’t explain, but also for all the old data that the old theory does.
  2. Erroneous Science - theories that are not yet refuted, but are tested by false data:
    1. Fake Science — scientist intentionally deceives others
    2. Mistaken Science — scientist unintentionally deceives self (and others)
  3. Pseudoscience - theories inconsistent with accepted science, with attempts to refute them avoided or ignored, e.g., astrology, numerology, biorhythms, dowsing
  4. Fringe Science - theories inconsistent with accepted science, not yet refuted, but attempts to do so are invited, e.g., unified field theories (data accumulate faster than theory construction), Rupert Sheldrake’s “morphogenetic fields,” Schmidt’s ESP findings, etc.

I think a few of the popular practices in the "forensic sciences" - like "headlight spread pattern analysis" and the merging of laser scans and CCTV images to measure items present in the CCTV footage that aren't present in the laser scan - currently qualify as pseudoscience. Why? They haven't been validated. Questions about validation are often sidestepped, and users focus instead on specific legal cases where the technique was employed and subsequently allowed in as demonstrative evidence. The way in which these two techniques are employed is inconsistent with accepted science - math, stats, logic ...

Indeed, to qualify as accepted science, these new techniques must "account for not only the data that the old theory doesn’t, but also all the old data that the old theory does." But, in doing so, must they also account for the existing "rules"? For example, with Single Image Photogrammetry, the error potential (range) increases as the distance from the camera to the subject / object increases. The farther away the thing or person is from the camera, the greater the error, or range of values (e.g., the subject is between 5'10" and 6'2"). Single Image Photogrammetry also needs a reference point in the same orientation as the subject / object, and it needs that reference to be close to the thing being measured. As the distance from the reference increases, the error potential (range) increases.
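To make that distance-to-error relationship concrete, here's a minimal sketch using a simple pinhole-camera model. Every number in it - the effective focal length in pixels, the distances, the assumed measurement uncertainty of a couple of pixels - is hypothetical and chosen only to illustrate the trend. It is not a validated measurement tool.

```python
# A minimal sketch (not a validated tool): how a +/- 2 pixel measurement
# uncertainty becomes a height range under a simple pinhole-camera model.
# The focal length, distances, and subject height below are hypothetical.

def height_range(pixel_height, pixel_error, distance_m, focal_px):
    """Return (low, high) height estimates in metres for a subject whose
    image spans pixel_height +/- pixel_error pixels at distance_m metres,
    seen by a camera with an effective focal length of focal_px pixels."""
    low = (pixel_height - pixel_error) * distance_m / focal_px
    high = (pixel_height + pixel_error) * distance_m / focal_px
    return low, high

FOCAL_PX = 1200.0        # hypothetical effective focal length, in pixels
TRUE_HEIGHT_M = 1.80     # hypothetical subject height, in metres

for distance_m in (3.0, 10.0, 30.0):
    # pixels the subject would span at this distance (pinhole projection)
    pixel_height = FOCAL_PX * TRUE_HEIGHT_M / distance_m
    low, high = height_range(pixel_height, 2.0, distance_m, FOCAL_PX)
    print(f"{distance_m:4.0f} m: {low:.2f} m to {high:.2f} m "
          f"(range {high - low:.3f} m)")
```

The specific values don't matter. What matters is that the width of the range grows directly with the distance from the camera, so an honest result is a range of values, not a single number declared to "match."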

Thus, for "headlight spread pattern" analysis, what is the nominal resolution of the "pattern"? If the vehicle is in motion, how is motion blur mitigated? Given all of the variables involved and the nominal resolution within the target area (which is itself variable due to the perspective effect), the "pattern" would rightly become a "range of values." If it's a "range of values," how can results derived from convenience samples be declared to "match" or be "consistent with" some observed phenomenon? Analysts are employing this technique in their work, but no validation exists - no studies, no peer-reviewed published papers, nothing. Wouldn't part of validating the method mean using the old rules - analytical trigonometry - to check one's work?
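As a rough illustration of why nominal resolution matters here, the sketch below works out the real-world size covered by a single pixel at the target and how that, plus a little blur, turns the "pattern" into a range of values. As before, this assumes a simple pinhole camera, and the focal length, pattern width, and blur figures are hypothetical.

```python
# A minimal sketch (hypothetical numbers throughout): nominal resolution at
# the target - the real-world size covered by one pixel - and how it turns
# a "pattern" measurement into a range of values rather than a single number.

def nominal_resolution_m(distance_m, focal_px):
    """Metres of real-world width covered by one pixel at distance_m,
    for a pinhole camera with an effective focal length of focal_px pixels."""
    return distance_m / focal_px

FOCAL_PX = 900.0     # hypothetical effective focal length, in pixels
PATTERN_PX = 45      # hypothetical measured width of the "pattern", in pixels
BLUR_PX = 3          # hypothetical smear from motion blur and compression

for distance_m in (5.0, 15.0, 40.0):
    res = nominal_resolution_m(distance_m, FOCAL_PX)
    low = (PATTERN_PX - BLUR_PX) * res
    high = (PATTERN_PX + BLUR_PX) * res
    print(f"{distance_m:4.0f} m: {res * 100:.1f} cm/pixel -> "
          f"pattern width {low:.2f} m to {high:.2f} m")
```

Even under these generous, made-up assumptions, the "pattern" at realistic distances is a band, not a point. Declaring a "match" from that kind of data without a stated range, and without a validation study behind it, is exactly the problem.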

The same situation exists for the demonstrative aid produced by the mixed-methods approach of blending CCTV stills or videos (a capture of then) with 3D point clouds (a capture of now). The new approach must account for all of the old data. To date, the few limited studies in this area have used ideal situations and convenience samples. None have used an appropriate sample of crappy, low-priced DVRs. For example, Meline and Bruehs used a single DVR and actually tested the performance of their colleagues (link), not the theory that the measurement technique is valid. In their paper, they reference a study that utilized a tripod-mounted camera deployed in an apartment's living room to "test" the author's theory about accurate measurement. The author employed a convenience sample of about 10 friends, and the distance from the camera to the subjects was about 10 feet - it was done in a living room in someone's apartment, ffs.

I don't want to make the claim that the purveyors of these techniques are engaged in "fake science." I tend to think well of others. I think perhaps they're engaged in "mistaken science," unintentionally deceiving themselves and others.

We can reform the situation. We can insist that our agencies and employers make room in their budgets for research and validation studies. We must publish the results - positive or negative. We must test our assertions. In essence, we must insist that the work we perform is actually "science" and not "forensics" (rhetoric).

If you'd like to join me in this effort, I'd love to build a community of advocates for "accepted science." Feel free to contact me for more information.
