How zero-cost, authoritative-sounding text is breaking institutional accountability
On January 16, the chief constable of West Midlands Police in the United Kingdom stepped down after an official document produced under his authority was found to contain an incorrect reference. Under questioning, the chief constable acknowledged that part of the document had been drafted using a generative AI tool.
Much of the public debate has focused on bias, judgment, and individual responsibility, but the episode points to a structural problem that has been developing for a few years now.
From human diligence to synthetic diligence
For decades, modern institutions relied on an implicit assumption: if a document existed - especially one that looked formal, reasoned, structured - someone had spent time producing it. Reports, legal filings, safety assessments, and policy briefings were costly to generate, and even low-quality work required hours of human attention. That cost function created an informal but reliable signal of accountability.
But Generative AI breaks that assumption.
Draft-quality pieces can now be produced in seconds, arguments and citations included, and look convincing even when the underlying claims are entirely fabricated or misinterpreted. The issue in this case is not that automated systems sometimes hallucinate; humans make mistakes, too. The issue is that institutions have no scalable way to distinguish between text produced by a person reasoning through a problem and text produced by a model optimised to mimic that reasoning.
As the cost of producing authoritative-sounding text approaches zero, institutions accumulate synthetic work faster than they can verify it. Safety assessments, legal briefs, student essays, internal reports, and consultancy deliverables all start to look finished long before anyone has actually done any of the work implied by their appearance.
Fluency becomes a substitute for evidence of human judgment, and verification becomes the new bottleneck.
Where failures are already visible
The West Midlands case is not an isolated one, and similar failures are already forcing adjustments across institutions: courts, universities, government bodies, professional services, and even journalism have all been caught out.
Courts
Judges in several jurisdictions have sanctioned lawyers for submitting filings containing AI-generated, non-existent case law. In the United States, the best-known example is Mata v. Avianca, in which a federal judge sanctioned lawyers who had filed a brief citing fabricated cases produced by a chatbot.
Universities
Higher education institutions are confronting the same gap between polished text and demonstrated work: essays and coursework can no longer be assumed to reflect a student's own reasoning.
Some departments have reintroduced handwritten or supervised exams, expanded oral assessments, and shifted evaluation into in-person settings. Oxford’s Faculty of Medieval and Modern Languages is among those reported to have moved assessment back into supervised, in-person formats.
Public bodies
Governments are beginning to formalise disclosure and auditability requirements for algorithmic tools. In the UK, the Algorithmic Transparency Recording Standard (ATRS) requires public bodies to publish records of the algorithmic tools they use in decision-making, and has been made mandatory for central government departments.
Private sector
The private sector is encountering the same problem, often with direct financial consequences. In Australia, Deloitte produced a government-commissioned report that was found to contain fabricated references; after acknowledging that generative AI had been used in its preparation, the firm agreed to refund part of its fee.
Similar episodes have surfaced elsewhere. Media outlets including CNET and MSN have retracted or corrected AI-generated articles containing factual errors.
Across these cases, we see a consistent pattern. Institutions assumed that efficiently produced text was a reliable signal of underlying work. But that assumption no longer holds.
Why institutions are adding friction
The emerging responses - manual attestation, in-person assessment, disclosure requirements, limits on undeclared AI use - can look like resistance to innovation. They are not. They are an attempt to restore a basic institutional function we still rely on: linking text to responsibility.
When verification capacity is scarce, adding friction is rational rather than Luddite. If an organisation can generate more documents than anyone can realistically check, it accumulates decisions that no one can truly own. Over time, that erodes internal trust and external legitimacy: colleagues stop believing that reports reflect real expertise, and courts, regulators, and the public lose confidence that official records rest on accountable judgment.
The West Midlands episode illustrates this dynamic clearly. The political fallout was not caused solely by an incorrect reference. It was caused by the revelation that a document carrying real consequences had entered an official process without anyone being able to say, with confidence, who - if anyone - had verified it.
The structural change coming
Generative AI does not simply make institutions faster. It changes what is scarce: production is now abundant, verification is not.
And that shift requires a redesign of institutional workflows. Provenance - how a document was produced, who edited it, who checked it, and who stands behind it - now needs to become explicit rather than assumed. Some categories of work will need clear boundaries where identifiable human authorship remains non-negotiable. Others may accommodate automation, but only within review limits that match available oversight.
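To make the idea concrete, here is a minimal sketch of what an explicit provenance record could look like. The class and field names are hypothetical illustrations, not taken from any existing standard or from the tools and bodies mentioned above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical provenance record: field names and structure are illustrative only.
@dataclass
class ProvenanceEvent:
    actor: str           # person or system responsible for this step
    action: str          # e.g. "drafted", "edited", "verified", "approved"
    tool: str | None     # e.g. "generative AI assistant", or None for manual work
    timestamp: datetime

@dataclass
class DocumentProvenance:
    document_id: str
    events: list[ProvenanceEvent] = field(default_factory=list)

    def record(self, actor: str, action: str, tool: str | None = None) -> None:
        """Append one step in the document's production history."""
        self.events.append(
            ProvenanceEvent(actor, action, tool, datetime.now(timezone.utc))
        )

    def verified_by(self) -> list[str]:
        """Who, if anyone, has explicitly checked this document."""
        return [e.actor for e in self.events if e.action == "verified"]


# Example: a report drafted with AI assistance and approved, but never verified.
report = DocumentProvenance("safety-assessment-042")
report.record("drafting officer", "drafted", tool="generative AI assistant")
report.record("unit head", "approved")
print(report.verified_by())  # [] -> no one stands behind the checking step
```

The point of such a record is not the data structure itself but the question it forces at sign-off: if the list of verifiers is empty, the document is not finished, however polished it looks.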
This is not a temporary adjustment. Synthetic diligence is cheap and convincing, and failures like the one in West Midlands are likely to recur. Each event will test public trust - in AI tools and, more importantly, in institutions and their safeguards.
The institutions that adapt will be those that accept a slower, more verification-centric mode of operation in high-stakes contexts. Those that don’t will continue to produce documents that look finished - until the moment they are forced to explain who actually did the work.
Lead image credit: AdobeStock | 132785912
