It’s been a while since I posted here.
I’ve moved on from daily forensic casework and stepped back from the blog to focus on broader systems work—on justice, equity, neurodiversity, and how we reckon with broken epistemologies. But sometimes something happens that drags you out of retirement whether you like it or not. And this is one of those moments.
The Electronic Frontier Foundation (EFF) just released a thorough, scathing analysis of Axon’s “Draft One”—an AI tool currently being used in police departments to automatically generate narrative reports from body-worn camera audio. The headlines focus on missing drafts and vanishing accountability, but from where I sit, this story goes much deeper. It cuts to the core of what forensic science is supposed to be—and the mechanisms by which truth, evidence, and responsibility are disappearing in real time.
This is not just an engineering failure. It’s a perversion of process, designed to undermine transparency and suppress ground truth. And unless we start shouting about it now, loudly and persistently, we may find the damage done before it’s ever tested in court.
So here I am. Let’s talk.
What Is Forensic Science, Again?
When I was serving on the OSAC Video/Imaging Technology and Analysis Task Group, we spent a fair bit of time just trying to define our terms. “Forensic science” may sound straightforward, but once you include all disciplines—from biology to acoustics to digital traces—you need language that holds across those domains.
The definition we finally agreed on read as follows (it has probably changed since):
Forensic science is the systematic and coherent study of traces to address questions of authentication, identification, classification, reconstruction, and evaluation for a legal context.
Let’s break that down.
- Systematic and coherent: You use a validated methodology. You test. You document. You repeat. You don’t just guess.
- Study of traces: Not assumptions. Not vibes. Not hallucinated context from a black-box algorithm.
- For a legal context: Which means your findings must be discoverable, explainable, and reproducible in court.
Any tool—human or machine—that doesn’t meet those standards doesn’t belong in forensic science or in the justice system.
And yet Axon’s Draft One is being used to generate police reports that:
- Can’t be traced to an identifiable author.
- Can’t be audited for edits or AI influence.
- Can’t be cross-examined in court, because the drafts no longer exist.
That’s not justice. That’s marketing.
The Death of the Draft
Let’s be clear: deleting original drafts isn’t some accidental oversight—it’s a deliberate design choice. A senior Axon product manager is on record stating that storing drafts would only “create more disclosure headaches” for their customers and attorneys.
In other words, the system is intentionally structured to evade scrutiny.
In civil proceedings, experts operate under strict obligations. Rule 26 of the Federal Rules of Civil Procedure (or its UK counterparts) requires comprehensive disclosure: your notes, your methodology, your tools, your working files. Everything from the History Log in Photoshop to the process report in Amped FIVE is considered part of the evidentiary record and must be turned over. This ensures transparency and allows opposing parties to challenge the foundation of any claim.
But there’s no equivalent to Rule 26 in criminal procedure.
And that absence is where much of the injustice festers. In criminal cases—especially those resolved through plea deals—discovery is often minimal, accountability mechanisms are weaker, and tools like Draft One can be quietly implemented with no real oversight. The system permits it not because it’s just, but because it’s expedient.
When a narrative is generated by AI, lightly skimmed or rubber-stamped by an officer, and submitted without retaining the drafts, who’s actually attesting to authorship?
No one.
And in a system already tilted against defendants, that’s not just a technical gap—it’s a structural abuse.
Officers of the Court, Without Responsibility
When a police officer submits a report, they do so as an officer of the court. That’s not metaphorical. They’re bound by law and professional ethics to tell the truth, to the best of their knowledge, under penalty of perjury.
The legal assumption is that the report is theirs. They saw the event. They wrote it down. They stand by it.
But if Draft One creates a first draft—based on audio the officer might not even remember clearly—and that draft is then lightly edited (or not at all), who is really the author?
If the report contains bias, omission, or invention, is it the fault of the AI? Or the officer? And how would you know?
EFF is absolutely right to call this a “smokescreen”—one that could allow officers to deny responsibility while still reaping the legal authority of a sworn report. It’s the worst of both worlds: automation without accountability.
The Market Logic Behind the Curtain
Let’s not ignore the profit motive either.
Axon is a publicly traded company. Its duty is to shareholders, not truth. Draft One is being rolled out not because it meets evidentiary standards—but because it saves time, cuts labour costs, and locks departments deeper into the Axon ecosystem.
In a context where most cases end in plea deals—where defence teams rarely get the chance to challenge reports through discovery or trial—these AI-generated narratives may never face real scrutiny. They’ll become “truth” by default, untested and untestable.
This is how you change the epistemology of the criminal legal system without ever passing a law. You let a company rewrite the workflow.
And if no one objects loudly enough, that new workflow becomes the norm.
The Real Problem: Validation and Ground Truth
When I wrote my 2020 article on Ground Truth in Digital/Multimedia Forensics, I was wrestling with many of these same questions.
Validation isn’t optional. You don’t get to skip the experiment just because the vendor says their tool is “reliable.” You have to test—under real conditions, with real data, using accepted scientific methods.
And if your tool doesn’t allow for that kind of testing—if it deletes its own outputs, obscures its authorship, or refuses to log its transformations—then you are not practising forensic science.
You’re engaging in speculative automation.
You’re telling stories with invisible authors and calling them evidence.
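And none of this is exotic engineering. The auditability that forensic practice demands can be sketched in a few dozen lines: every draft preserved, every version attributed to an author (human or machine), every silent deletion or alteration detectable. The sketch below is purely illustrative and assumes nothing about Axon’s actual architecture; the names (`DraftLog`, `append`, the author labels) are mine, not theirs.

```python
import hashlib
import json
from datetime import datetime, timezone

class DraftLog:
    """Append-only, hash-chained log of report drafts.

    Each entry records who produced a version (human or AI), when,
    and the full text. Chaining each entry to the hash of the one
    before it makes removing or rewriting any earlier draft detectable.
    """

    def __init__(self):
        self.entries = []

    def append(self, author: str, text: str) -> str:
        # Link this draft to the previous one (or to a zero hash at the start).
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "author": author,  # e.g. a hypothetical "draft-one-v1" or "Officer #1234"
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "text": text,
            "prev_hash": prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)
        return entry["hash"]

    def verify(self) -> bool:
        """Recompute the chain; False if any draft was altered or removed."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = DraftLog()
log.append("draft-one-v1", "Subject was observed fleeing the scene.")
log.append("Officer #1234", "Subject was observed walking away from the scene.")
assert log.verify()

log.entries[0]["text"] = "Subject complied fully."  # tamper with the AI draft
assert not log.verify()                             # the alteration is detected
```

The point is not the particular implementation; it is that retaining and chaining drafts is trivially cheap. A tool that discards them is making a choice, not hitting a limitation.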
A Final Word (For Now)
I never thought I’d see the day when officers could submit reports they didn’t write, generated by a system that deletes its drafts, under the banner of “efficiency”—and still have those reports carry the full weight of legal authorship.
But here we are.
And so, for the moment, I’m coming out of retirement. Because the integrity of the judicial system matters. Because ground truth matters. And because the legal system cannot afford to outsource authorship and accountability to a product designed to dodge both.
This isn’t innovation. It’s abdication.
And if we don’t push back now, we may find ourselves standing in courtrooms arguing with ghosts—armed only with final reports, stripped of history, speaking with no traceable voice.