There is a particular failure mode that afflicts intelligent, well-informed people more than anyone else. It is not ignorance; they have the data. It is not a lack of analytical skill; they can follow the arguments. It is something more structural: an inability to feel how fast things are actually moving, even while holding the evidence in both hands.
It has no formal name, but it deserves one. Call it human amnesia. Not the clinical kind, but the civilizational kind. A recurring inability to internalize not that change happens, but how quickly change can compound once it starts.
We have been here before. Several times. And each time, the people living through it were equally certain that transformation of that magnitude was either impossible, exaggerated, or comfortably distant. They were not stupid. They were using heuristics built for a slower world. So are we.
The Die-of-Shock Test
Tim Urban, in his widely read Wait But Why essay on the Law of Accelerating Returns, proposed a thought experiment that is hard to shake once you hear it.
Take a person from the year 1750 and drop them into 2026. The experience would not merely be disorienting. It would be incomprehensible. Electric light. Aircraft. Smartphones. Surgery under anesthesia. A species that walked on the moon and then got bored of it. Urban's argument is that the shock would be so total, so beyond the visitor's cognitive framework, that it might literally kill them.
Now ask a different question: how far back in time would that person from 1750 need to travel to inflict the same level of shock on someone else? Not a hundred years. Not five hundred. Urban's answer is roughly 12,000 years, back to the world before agriculture, before cities, before writing. You would need to cross the entire boundary between settled civilization and nomadic hunter-gatherer life to find a gap wide enough to match what 275 years of acceleration has produced.
Here is where it gets uncomfortable. If progress is genuinely exponential, then the amount of change needed to produce that same die-of-shock moment keeps shrinking. The jump from 10,000 BCE to 1750 took twelve millennia. The jump from 1750 to today took 275 years. Urban suggests the next equivalent jump might take only 25 or 30 years. Maybe less.
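To see why equal jumps keep shrinking, it helps to make the compounding explicit. The sketch below is a toy model, not Urban's or Kurzweil's math: the 0.05 percent yearly growth in the pace of change and the definition of one "shock" as the change accumulated over the first interval are arbitrary illustrative assumptions.

```python
# Toy model of shrinking "die-of-shock" intervals under compounding progress.
# Illustrative assumptions only: the pace of change grows 0.05% per year, and one
# "shock" is defined as the total change accumulated over the first interval.

GROWTH = 1.0005  # assumed yearly multiplier on the pace of change


def shock_intervals(first_interval_years: int, count: int) -> list[int]:
    """Successive intervals (in years) that each deliver the same total change,
    given that the pace of change in year t is GROWTH ** t."""
    # The change accumulated over the first interval defines one "shock".
    threshold = sum(GROWTH ** t for t in range(first_interval_years))
    intervals, year = [first_interval_years], first_interval_years
    for _ in range(count - 1):
        accumulated, length = 0.0, 0
        while accumulated < threshold:
            accumulated += GROWTH ** year  # change delivered during this year
            year += 1
            length += 1
        intervals.append(length)
    return intervals


# First interval: 12,000 years. Subsequent intervals deliver the same amount of
# change in roughly 1,400, then 800, then 580, then 450 years.
print(shock_intervals(first_interval_years=12_000, count=5))
```

The specific numbers mean nothing; the shape is the point. Hold the amount of change fixed, let the rate of change compound, and each successive interval gets shorter.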
The thought experiment is imprecise, and it is meant to be. Its power is not mathematical. It is perceptual. It forces you to confront a pattern that your brain keeps trying to flatten: the distance between "where we were" and "where we are" is getting shorter with every cycle, and the current cycle is the shortest yet.
A person from 1750 lived in a world of horses, candlelight, handwritten letters, and seasonal farming rhythms. They were genetically and cognitively identical to us. Same brain, same hardware. What separates them from us is not biology but context: 275 years of compounding innovation that would overwhelm every mental model they had for how the world works.
Now consider that someone from 20,000 years ago, deep in the pre-agricultural era, was also anatomically modern. Same brain architecture. Same capacity for language, tool use, and abstract reasoning. They could, given enough time, learn to navigate the modern world. But the initial shock of encountering it, with no framework for cities or writing or electricity or digital networks, would be orders of magnitude beyond what our 1750 visitor would face.
The point is not that these people were less intelligent. The point is that the human mind, across all of these eras, processes change at roughly the same speed. The world does not. And the gap between how fast we can adapt and how fast things are actually moving is the central tension of this moment.
The Accelerating Compression
The die-of-shock thought experiment has teeth because the historical record backs it up. Each major transformation in human history has followed the same pattern, only faster, and the acceleration is not subtle.
Agriculture (beginning around 10,000 BCE) took roughly seven thousand years to produce the mature civilizations of Egypt and Mesopotamia. It was transformative beyond measure, reshaping not just society but human biology itself. Research published in Proceedings of the National Academy of Sciences (2025) found that the shift to dense farming settlements created new evolutionary pressures almost immediately: the gene FUT2 began evolving as enteric pathogens flourished in crowded villages, with loss-of-function mutations spreading rapidly through populations. Skeletal evidence from early agricultural sites shows increased disease, nutritional deficiency, and reduced stature. Agriculture changed what humans were. But it unfolded so slowly that no single generation perceived the arc. It was just the world.
Industrialization compressed a comparable scale of disruption into about eighty years. Computing compressed it further, with the revolution accelerating within itself: forty years from Turing's paper to personal computers, fifteen more to the commercial internet, less than a decade from the iPhone to smartphones becoming the infrastructure of daily life.
And then AI. ChatGPT launched publicly forty months ago. In that window, large language models went from writing passable essays to executing complex multi-step workflows, writing production-quality software, passing professional licensing exams, and exhibiting reasoning capabilities that the field's own researchers did not expect to see this soon.

| Transformation | Approximate compression |
| --- | --- |
| Agriculture (from ~10,000 BCE) | ~7,000 years to the mature civilizations of Egypt and Mesopotamia |
| Industrialization | ~80 years |
| Computing | ~40 years from Turing's paper to personal computers; under a decade from the iPhone to everyday infrastructure |
| AI (large language models) | ~40 months from ChatGPT's public launch to the capabilities above |
A caveat is necessary here, and it needs to be more than pro forma. The rows in this table are not measuring the same thing. The first three rows track the time from a technology's emergence to its deep restructuring of how societies function: how people work, live, eat, and relate to one another. The last row measures something narrower: the interval between a consumer product launch and a set of capability benchmarks achieved in laboratory and controlled settings. These are genuinely different categories. A model passing a licensing exam is not the same kind of event as agriculture reshaping human skeletal structure or electrification redesigning the factory floor.
What the table does show, and this is not trivial, is the pace of capability development within the current cycle. Even granting that demonstrated capability and civilizational transformation are different things, the speed at which the underlying technology is advancing is historically unprecedented. Whether that capability translates into structural change at a comparable speed is a separate and genuinely open question—one the next section addresses directly.
The Strongest Case for Skepticism
Before diagnosing why people underestimate AI, it is worth taking the skeptical position seriously. Not the lazy version found on social media, but the substantive one advanced by serious researchers.
Yann LeCun, Meta's chief AI scientist, has argued persistently that current transformer-based architectures lack grounded world models. They predict text. They do not build causal representations of reality the way embodied cognition does. He is probably right that there is a meaningful gap between fluent language production and genuine understanding, and that this gap matters for the kind of robust, generalizable intelligence that would constitute AGI. This is not goalpost-moving. It is a substantive technical critique, and dismissing it is as intellectually lazy as dismissing AI progress itself.
There is also what economists call the deployment gap: the historically documented lag between a technology's existence and its restructuring of the world. Electricity was clearly superior to steam by the 1880s. It took forty years for factories to be redesigned around electric motors, because transformation requires not just a better tool but new infrastructure, new workflows, new organizational logic, and new generations of workers trained differently. Robert Gordon, the economist, has built an entire research program around the argument that transformative technologies deliver less economic impact than their enthusiasts predict, and deliver it more slowly.
These are real observations. They deserve weight. But they have limits, and understanding where those limits are is where the interesting analysis begins.
Where Skepticism Breaks Down
The strongest skeptical arguments share a common structural flaw: they treat the current snapshot as the ceiling.
"Transformers can't build world models" may be true of today's architectures. But the field is not frozen on transformers. State-space models, hybrid architectures, neurosymbolic systems, and approaches not yet published are in active development across every major lab. Confusing the limitations of the current best tool with the ceiling of the entire enterprise is the skeptic's most reliable error. It is the equivalent of arguing in 1995 that the internet would never be commercially important because dial-up was too slow.
The deployment gap argument is stronger, and it requires a more careful response than AI optimists typically give it. The standard rebuttal—that software does not require physical infrastructure the way electricity did, so adoption will be faster—is true at the distribution layer but insufficient as a complete answer. When a new AI capability emerges, it can reach a billion users within days through an API update. The friction of wiring, generators, and redesigned factory floors genuinely does not apply.
But the deployment gap was never only about physical infrastructure. It was also about organizational redesign, regulatory adaptation, workforce retraining, and institutional trust-building—all of which are deeply relevant to AI and none of which move at API speed. Healthcare systems will not restructure their diagnostic workflows in a quarter. Legal frameworks will not absorb AI-generated evidence overnight. Educational institutions, government agencies, and corporate hierarchies all carry their own inertia, and that inertia is real.
The honest assessment is that the deployment gap likely applies with partial force. The technological distribution layer is genuinely faster than any previous cycle. The institutional absorption layer is not. What makes this moment distinct is that these two timescales are more decoupled than in prior transitions. Electricity required both physical and institutional change, and they moved at roughly similar speeds. AI requires institutional change but little new physical infrastructure on the adoption side, which means capability can outrun adoption in ways that create their own pressures: competitive dynamics that force faster institutional adaptation than organizations would otherwise choose, regulatory frameworks scrambling to catch up to deployed systems, and a growing gap between what the technology can do and what society has decided it should be allowed to do.
This is not the same as saying the deployment gap does not exist. It is saying that the gap operates differently here, and that "differently" may mean "shorter" without meaning "instant."
And then there is the pattern that repeats across every cycle of AI progress: the retreating benchmark.
When AI could not play chess, chess was the test of intelligence. When it mastered chess, the bar moved to Go. When it mastered Go, the bar moved to natural language fluency. When models became fluent, the bar moved to reasoning. When reasoning improved, the bar moved to consciousness. The target is always one step ahead of the demonstrated capability, and each retreat is framed not as a concession but as a clarification of what was really meant all along.
A fair objection: sometimes the goalposts move because earlier benchmarks genuinely were poor proxies for what matters. Chess was not a good test of general intelligence, and recognizing that is intellectual refinement, not retreat. But the distinction between genuine refinement and defensive repositioning is worth tracking, and the pattern as a whole should make us cautious. When we say "it still can't do X," the historical record shows that X has a shelf life measured in months, not years. The specific prediction may be technically sound. The implied confidence that X is the barrier—the one that will finally hold—almost never is.
Consciousness Is Not the Question (But It Is a Question)
There is no scientific evidence of consciousness in any current AI system. The question of whether there is something it is like to be a large language model remains genuinely open. Current science does not have a reliable method to answer it, and anyone claiming certainty in either direction is overstating what the evidence supports.
But consciousness and capability are different questions, and conflating them is one of the most common ways this conversation goes wrong.
You do not need consciousness to write functional code. You do not need subjective experience to identify a security vulnerability. You do not need qualia to automate a workflow that previously required a team of analysts. The practical question—what can these systems do and how fast is that expanding—has a clear empirical answer that does not depend on resolving the hard problem of consciousness.
Using the unsettled metaphysics as a reason to avoid engaging with the capability trajectory is not philosophical rigor. It is avoidance wearing rigor's clothes.
That said, consciousness matters for reasons that go beyond this essay's scope: questions of moral status, rights, and the ethical frameworks we will need if and when the line between sophisticated information processing and genuine experience becomes harder to draw. Those are not distractions from the AI conversation. They are a different, equally important part of it. But they do not need to be resolved before we take the capability trajectory seriously, and using them as a prerequisite for engagement is a form of delay that the pace of development does not accommodate.
What 2026 Looks Like From Inside
As of this writing, the frontier AI labs are in an open capability race. Each new model generation arrives with meaningfully expanded abilities. Internal safety evaluations, the ones labs publish alongside their model cards, flag increasingly serious risks with each release, particularly around cybersecurity, autonomous operation, and persuasion. The language in these evaluations has shifted notably over the past two years: from "potential concerns" in 2023, to "significant risks" in 2025, to formulations that suggest the safety teams themselves are being surprised by what they find in their own models.
Markets have noticed. The investor class, whatever its other failings, has strong incentives to price capability accurately. The capital flowing into AI infrastructure through 2025 and into 2026 reflects a conviction that something real and structural is happening. Not hype-cycle real. Restructuring-the-economy real.
Meanwhile, the public discourse remains stuck in a loop. Each new capability is met with a brief wave of astonishment, followed by rapid normalization ("it's impressive, but…"), followed by a reanchoring to the prior baseline. The ratchet turns, and then we forget it turned. This is human amnesia in action. Not forgetting that progress happened, but failing to feel the cumulative weight of sequential advances that never pause long enough for recalibration.
The Perceptual Mismatch
The deepest problem is not technical. It is not economic. It is perceptual.
Compound growth is one of the hardest things for the human mind to intuitively grasp. We evolved to track predators at walking speed, to notice seasonal change over months, to plan around harvests. We did not evolve to feel the force of a curve that doubles every year, or a capability frontier that advances every quarter.
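To feel how far off linear intuition runs, consider a quantity that doubles every year. The doubling rate here is an illustrative assumption, not a measured growth rate for any real capability.

```python
# Linear intuition versus a quantity that doubles every year (illustrative only).
for years in (1, 5, 10, 20):
    linear = 1 + years       # "a little more each year": how intuition extrapolates
    doubling = 2 ** years    # what compounding actually delivers
    print(f"after {years:>2} years: intuition ~{linear}x, doubling {doubling:,}x")
```

After ten years the two estimates differ by roughly a factor of a hundred; after twenty, by roughly fifty thousand.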
This is the thread that connects the person from 20,000 years ago, the visitor from 1750, and you reading this sentence. The same cognitive hardware running in all three. The same processing speed for change. The only difference is how fast the world outside the skull is moving. And right now, it is moving faster than it ever has.
The agricultural revolution unfolded over millennia. No one experienced it as a revolution; it was just the world slowly becoming different. The industrial revolution unfolded over decades. Most people recognized it as disruption but not as total transformation until it was already behind them. The computing revolution unfolded over years, and we are still, thirty years later, debating what it meant.
The AI acceleration is unfolding over months. And the most predictable thing about it is that the debate over whether it is real will not be resolved before the next wave arrives to make the question moot.
The Honest Position
None of this means the future is settled, or that the most breathless predictions will prove correct. Transformative technologies can stall. Scaling laws can hit walls. Regulation can slow deployment. The path from "impressive lab demo" to "restructured civilization" is never as straight or as fast as the enthusiasts promise.
But the skeptical default—"it's overhyped, it's a bubble, it's just autocomplete"—is no longer a defensible resting position. It requires ignoring too much evidence, discounting too many demonstrated capabilities, and relying on intuitions calibrated to a world that no longer exists.
The honest position in March 2026 is uncertainty. Not the comfortable kind that lets you return to prior beliefs undisturbed, but active uncertainty: the kind that demands engagement. Specifically: the speed of capability development is no longer in serious doubt. The demonstrated abilities of frontier models are empirical facts, not hype. What remains genuinely uncertain is the translation layer—how fast, and how completely, capability reshapes institutions, labor markets, power structures, and daily life. The deployment gap is real, but it operates differently for software than for industrial technology, and we do not yet know how much shorter "differently" means.
We have been here before. We forgot every time. The only question is whether, with the compression now measured in months instead of centuries, we can afford the luxury of forgetting again.
Sources:
- Tim Urban, "The AI Revolution: The Road to Superintelligence," Wait But Why (2015)
- Ray Kurzweil, "The Law of Accelerating Returns," Edge.org (2001)
- Neolithic agricultural revolution and FUT2 gene evolution, PNAS (2025), PMC/NIH
- Early agriculture's impact on human health, PMC/NIH
- Industrial Revolution timeline, Britannica, History.com
- Yann LeCun on world models and transformer limitations, Meta AI research, public statements 2022–2025
- Robert Gordon, "The Rise and Fall of American Growth" (2016)
- Paul A. David, "The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox" (1990)
