In the latest edition of Evidence Technology Magazine (print version), there's an editorial that discusses the problem of experts in forensic science who often have no training in science or awareness of the basics of the scientific method. The author uses the privateer working in their garage as an exemplar of shoddy practice. While a case can be made that these folks cause problems in the courts, they're easy to spot. Just ask some very specific questions about training, experience, recent case testimony, and so on. Sure, they might know about computers. They might have a CIS degree. But what training and experience do they have with DVRs? What's their procedure? Has it been validated? What tools do they use? Are those tools generally accepted amongst the industry's experts? Have they been validated? Etc.
Yet, as big a problem as bogus experts might be, there's likely a worse one lurking under the surface - the systemic fraud of "just trying to get something done" that is regularly practiced in government agencies. If it's a fraud for untrained and poorly equipped privateers to pass their work off as relevant and reliable, is it not also a fraud when a law enforcement agency's employee does the same thing? And is it not systemic when the agency not only permits it to occur, but encourages it through a climate and culture of "just trying to get things done"?
The root of this problem might just be the NYPD's CompStat program, a "data-driven" program that has spread around the country. As employees rush to clear cases and pressure is exerted to "just get things done," can the public trust the statistics generated in this environment?
"... what if the data were somehow skewed?
That question has emerged as one of the by-products of a survey conducted by two criminologists that has raised doubts about the integrity of the New York Police Department’s highly regarded crime tracking program, CompStat. Relying on the anonymous responses of hundreds of retired high-ranking police officials, the survey found that tremendous pressure to reduce crime, year after year, prompted some supervisors and precinct commanders to distort crime statistics."
So as I work toward eventually receiving a PhD, and sit through seemingly endless lectures on reliability and validity in statistics, there is (quite sadly) a gaping hole in the CompStat media campaign - validity and reliability studies from independent researchers. From what I've read on the issue, the narratives freely conflate correlation with causation, but there's not a lot of reliable data supporting CompStat as valid and reliable.
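To make "reliability study" concrete: one standard design would be an audit in which independent reviewers re-classify a sample of the same incident reports, and the two sets of classifications are compared with a chance-corrected agreement measure such as Cohen's kappa. Below is a minimal sketch in Python. The data, crime labels, and scenario are entirely hypothetical - this illustrates one possible study design, not any actual CompStat audit.

# A minimal sketch of one kind of reliability check independent researchers
# could run on CompStat inputs: compare original precinct classifications
# against an independent re-classification of the same case files, and
# measure agreement with Cohen's kappa. All data here are hypothetical.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Original precinct classifications vs. an independent auditor's re-reading
# of the same ten incident reports (hypothetical labels).
precinct = ["grand_larceny", "petit_larceny", "assault", "petit_larceny",
            "burglary", "petit_larceny", "assault", "grand_larceny",
            "petit_larceny", "burglary"]
auditor  = ["grand_larceny", "grand_larceny", "assault", "petit_larceny",
            "burglary", "grand_larceny", "assault", "grand_larceny",
            "petit_larceny", "burglary"]

print(f"Cohen's kappa: {cohens_kappa(precinct, auditor):.2f}")

A kappa near 1 would suggest the original classifications hold up under independent review; a low kappa, especially one driven by one-directional disagreements (felonies consistently re-read as misdemeanors, say), would be exactly the kind of skew the survey respondents described.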
Thus, if there's pressure to perform, might there also be pressure to "just get something done" in the lab? Do you honestly think that if a whole agency is skewing the CompStat books, the pressure to cook the books somehow stops at the lab door? If they are skewing the data (and thus the results), is this any worse, or better, than the privateer in the Evidence Technology Magazine article? If government employees are ignoring the scientific method, using unproven/unvalidated tools and techniques, and working outside their scope of expertise, I would argue that it's worse, far worse.