my clawdbot mentioned something called the “Illich test.” I asked him – is this about Lenin? no, different Illich. turned out to be an Austrian priest who wrote a 110-page book that accidentally predicted AI dependency forty-nine years before ChatGPT existed. I started reading about him, then spent two hours arguing with my wife about whether he was right. she asked questions I couldn’t answer. I disagreed with him on at least three things. and all of that – the confusion, the argument, the disagreement – taught me more than the 42KB research report my bot had already generated.
that gap between having information and earning understanding is the entire point of this issue.
this week I found the guy who diagnosed the problem in 1973.
Ivan Illich, or: the bug report nobody read
his core concept: conviviality. a technical term – not “be nice to each other.” a systems design principle. Illich defines a convivial tool as one that passes four tests:
- anyone can use it, as often or as seldom as they want
- it serves the purpose chosen by the user, not the tool’s creator
- one person using it doesn’t prevent another from using it
- its existence doesn’t create an obligation to use it
read those again. now apply them to ChatGPT. to Cursor. to Claude.
tests 1 and 3 – maybe. test 2 – debatable, given how much these tools shape what you write toward the model’s style. test 4 – already failed. try applying for a writing job in 2026 without AI-polished samples. try shipping a product without AI-assisted code. the obligation is already baked in.
Illich called this – radical monopoly.
not one brand dominating a market. one type of product reshaping reality so alternatives become impossible. cars didn’t just beat buses. cars rebuilt cities so you couldn’t get anywhere on foot. “that motor traffic curtails the right to walk... constitutes radical monopoly.”
AI hasn’t monopolized writing. it’s monopolized the cognitive environment. you’re not forbidden to think without it. you’re just increasingly irrelevant if you do.
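the four tests read like assertions, so here’s a toy encoding of them – a sketch, not anything Illich wrote; the field names and the pass/fail scores are just my reading of the argument above:

```python
from dataclasses import dataclass

@dataclass
class Tool:
    # the four conviviality tests (names are mine, not Illich's)
    freely_usable: bool        # anyone can use it, as often or seldom as they want
    serves_user_purpose: bool  # purpose chosen by the user, not the tool's creator
    non_rivalrous: bool        # one person's use doesn't block another's
    no_obligation: bool        # its existence creates no obligation to use it

def is_convivial(t: Tool) -> bool:
    # failing any single test disqualifies the tool
    return all([t.freely_usable, t.serves_user_purpose,
                t.non_rivalrous, t.no_obligation])

# scores per the paragraph above: tests 1 and 3 maybe, 2 debatable, 4 failed
chatgpt = Tool(freely_usable=True, serves_user_purpose=False,
               non_rivalrous=True, no_obligation=False)
# a hammer, for contrast: extends your arm, demands nothing
hammer = Tool(freely_usable=True, serves_user_purpose=True,
              non_rivalrous=True, no_obligation=True)

print(is_convivial(chatgpt))  # False
print(is_convivial(hammer))   # True
```

the point of the `all()` is that conviviality isn’t a score you can partially earn – one failed test, and the tool has crossed over.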
the two watersheds
first watershed: new technology solves a clear problem. measurable improvement. medicine crossed this around 1913 – the point where visiting a doctor first gave you better than 50/50 odds of getting helped.
second watershed: the proven benefit becomes a rationale for domination. the tool starts serving itself. Illich claimed medicine had crossed this line. he coined iatrogenesis – doctor-caused disease – and argued the medical system was net harmful. he was wrong about the math. but his three categories of harm translate disturbingly well to AI:
- clinical: the tool directly harms you. AI hallucinations, confident wrong answers.
- social: every human activity gets redefined as requiring the tool. “I can’t write without ChatGPT” stops being a preference and becomes a condition.
- cultural: you lose the ability to do the thing without the tool. not reluctance — incapacity. the ability to sit with a blank page, to struggle through unclear thinking, to arrive at understanding through friction. gone.
that third one. MIT Media Lab ran a study last year – EEG scans of people writing essays with and without ChatGPT. 83% of AI users couldn’t quote their own essay minutes after writing it. the researchers called it “cognitive debt.” not a think piece — a brain scan. the neurons that fire when you struggle through writing go quiet when the AI does it for you. the effort is the learning. remove the effort, remove the learning.
I tracked my own screen for 16 days. what I found wasn’t that AI was giving me bad answers. it was that I’d stopped asking my own questions. the tool hadn’t failed. I had. it was working perfectly – and that was the problem.
vibe coding is cultural iatrogenesis
Illich distinguished between tools that extend human capability and tools that replace it. a hammer extends your arm. a robot that hammers while you watch replaces it. vibe coding is driving with GPS and never learning the city. works perfectly – until the signal drops.
at the first watershed, autocomplete and linting helped you code better. at the second, the tool writes the entire function and you can’t explain what it does. you crossed from extension to replacement. you can ship but you can’t think.
this doesn’t mean AI-assisted coding is bad. it means there’s a line, and the line is: can you still do this without the tool? if the answer is “not really, not anymore,” you’ve crossed the watershed.
radical monopoly in the cognitive commons
when job listings require “AI proficiency.” when publishers assume AI-edited prose. each one seems reasonable. together they rebuild the cognitive commons so that thinking without AI becomes a handicap.
Illich: “it is difficult to be protected against monopoly when a society is already littered with roads.” replace roads with AI tools. the infrastructure is already here. opting out is less a lifestyle choice than career suicide.
the open source connection
Lee Felsenstein, one of the original members of the Homebrew Computer Club – the group that gave us the personal computer – directly credited Illich’s Tools for Conviviality as a foundational influence. the idea that computing should be personal, hackable, owned by the user – that’s Illich’s conviviality applied to hardware. the personal computer was a convivial tool by design.
from there: personal computing → free software → open source → git → plain text → markdown. the entire stack that makes self-sovereign computing possible traces back to the same principle. tools you can inspect, modify, and run without permission.
and now we’re building the next layer on top of closed APIs, proprietary models, and subscription-gated cognition.
we went from Illich’s convivial computing to radical monopoly in three years.
what Illich got wrong
Illich would have loved the idea of network states – people governing themselves, outside institutional monopoly. but he’d look at the current implementations and see the same trap. token-gated citizenship is radical monopoly. governance-as-a-service is the institutionalization of what should be vernacular. he’d ask: can you leave without penalty? is there no obligation to participate?
fair. but here’s where I think he’s wrong: Illich never considered that individuals could build their own institutions. his framework assumes institutions are always external, always imposed. the one-person state breaks that assumption. when the institution is you – your protocol, your configuration, your POS – the critique changes shape.
my wife asked me: “so you’re saying your entire product is about making people not need your product?” I couldn’t answer immediately. that silence – that’s the Illich test working. if you can’t explain why someone should stop using your tool and still be fine, you haven’t built a convivial tool. you’ve built a subscription.
the vernacular test
your self.md is vernacular configuration. plain text. no platform dependency. no certification. derives authority from self-observation, not institutional validation. a markdown file passes the conviviality test that proprietary formats fail. anyone can read it. anyone can write it. it serves your purpose. it creates no obligation.
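to make that concrete, a minimal sketch of what such a file might look like – the headings and entries are purely illustrative, my own invention, not a spec:

```markdown
# self.md

## observations
- I reach for the model before I reach for the problem
- writing without autocomplete is slower but sticks longer

## operating principles
- draft first, then ask the machine
- one tool-free thinking session per day

## dependencies I'm watching
- Cursor for anything longer than a function
```

nothing here requires a platform, a parser, or anyone’s permission. any text editor can open it. that’s the whole trick.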
but here’s Illich’s warning, and I take it seriously: the moment self.md becomes a standard with a “correct” format, a scoring system, a certification – it crosses the second watershed. it becomes the institution it was meant to replace.
so the test for everything we build: does this tool make you more capable of acting without it? does the practice of maintaining your POS make you more self-aware even if you stop maintaining it? does the framework increase your sovereignty or just your dependency on the framework?
if the tool only works while you’re using the tool, it’s not convivial. it’s a runtime dependency with no fallback.
the pattern: open formats and local tools tend convivial. cloud platforms and AI services tend industrial. not because they’re evil. because their business model requires crossing the second watershed.
POS, revisited
this issue, with Illich’s framework: POS = can you still operate without the tools you currently depend on?
not “would you want to.” can you.
can you write without AI? can you code without Cursor? can you decide without consulting the model? can you sit with a blank page and a hard problem and produce something that didn’t pass through a neural network first?
if yes, you’re sovereign. the tools extend you.
if no, you’ve been extended by the tools. you’re not running your operating system. your operating system is running you.
Ray Svitla
stay evolving 🐌
