Have you ever almost been in an accident and avoided it with a reflexive move: instant, but it saved your life? Have you ever cracked a joke so in-context and so instant that everyone kept laughing for a while? How much time did you spend thinking about those? I thought so. This is the paradox: your highest-level thinking feels like no thinking at all.

The Lie That Burns Your Life

99% of people believe: more thinking = better outcomes. More hours studying = better grades. Longer meetings = better decisions. Bigger books = more knowledge. More analysis = less risk. All catastrophically wrong.

In 1996, cardiologist Brendan Reilly at Cook County Hospital faced a crisis. His emergency room was drowning in chest-pain patients. Doctors were running elaborate tests, asking dozens of questions, ordering multiple scans. They were thinking hard. And they were wrong 50% of the time (no better than a coin toss): either sending heart attack victims home or admitting people with heartburn.

A researcher named Lee Goldman had created an algorithm. It looked at a few variables: specific ECG patterns, fluid in the lungs, and unstable blood pressure. Accuracy jumped to 95%.

The doctors hated it. They refused to believe something so simple could outperform their expertise. “Medicine isn’t a cookbook,” they said. “You can’t reduce the complexity of the human body to six numbers.” For years, nobody used it. The algorithm sat in a journal while emergency rooms kept getting it wrong.

Then a desperate hospital in the mid-2000s, facing malpractice suits and overwhelmed staff, finally implemented it. It worked. Today, variations of that algorithm are standard.

The doctors who knew more, who thought longer, who considered more factors: they performed worse. Because they couldn’t tell signal from noise.
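To make the contrast concrete, here is a minimal sketch of what a "few variables" triage rule looks like as code. The variable names, the branching logic, and the output labels are illustrative assumptions, not Goldman's actual clinical criteria:

```python
# A toy decision rule in the spirit of the algorithm described above.
# Three binary findings in, one disposition out: hypothetical logic only.

def triage(ecg_ischemia: bool, lung_fluid: bool, unstable_bp: bool) -> str:
    """Classify a chest-pain patient from three binary findings."""
    risk_factors = sum([ecg_ischemia, lung_fluid, unstable_bp])
    if ecg_ischemia and risk_factors >= 2:
        return "admit-intensive"
    if risk_factors >= 1:
        return "admit-observe"
    return "send-home"

# Three inputs, a handful of branches: no 50-variable deliberation.
print(triage(ecg_ischemia=True, lung_fluid=True, unstable_bp=False))    # admit-intensive
print(triage(ecg_ischemia=False, lung_fluid=False, unstable_bp=False))  # send-home
```

The design point is not the medicine; it is that the whole rule fits in your head, which is exactly what lets it run as a reflex instead of a deliberation.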
They were computing with 50 variables when only 6 mattered. This is the true cost of low-quality compression: you die from information overdose.

You often learn more from the right 1% than from the entire 100%. A forensic scientist learns more from one strand of hair than from interviewing witnesses for a week. A master chef knows if a dish works from one bite, not from eating the whole plate. An experienced trader makes better decisions in 30 seconds than a novice does in 30 hours.

The difference isn’t thinking speed. Your brain is processing this right now through models of sounds, images, smells, tastes, and feelings. There is no faster format to think in, because any alternative would require translation, and translation is slow and lossy. The difference between fast and slow is compression quality. Words take longer to absorb because they are a combination of image + sound + feel; a picture is faster because it’s a direct element of your perception.

When More Becomes Less

Reality is viciously nonlinear. But our intuitions are stuck in linear mode.

Books: You think a 500-page book is “better” than a 200-page book because more pages = more knowledge. But page count is an imprecise measure: font size, spacing, and margins can make a physically shorter book hold more content, and vice versa. You don’t measure books by pages because it’s the best metric. You do it because pages are easy to count. The real metric is insight per unit of attention. But that’s harder to measure. So you optimize for the wrong thing and read bloated books that could’ve been essays. Sapiens (443 pages) teaches you more about human history than most 1,000-page textbooks because Yuval Harari compressed better. The value is in the compression, not the volume.

Relationships: You think talking to someone for a year means you understand them.
Then you walk into their bedroom and learn more in 30 seconds than a year of conversation taught you. The compression is in the environment they constructed, not the time spent.

Analysis: You think spending more time on a decision makes it better. Past a certain point, you’re not gaining signal; you’re adding noise. You’re inventing reasons to worry. You’re mistaking exhaustion for thoroughness.

The pattern is everywhere. More is not better. Better compression is better. Most people never learn to see the curve. They’re stuck thinking linearly in a nonlinear world, burning their lives on the wrong variables.

The Two Engines Running Your Skull

Your brain runs two systems. (Shout out to Daniel Kahneman.)

System 1: Fast, automatic, and compressed. This is your reflex to swerve. Your instant joke. Your gut feeling about a person. It runs on patterns you’ve already encoded.

System 2: Slow, deliberate, and expensive. This is you learning to drive, debugging code for the first time, working through a proof. It builds new patterns.

Here’s what you’re getting wrong: you think System 2 is always better because it feels like what society has conditioned you to consider “real thinking.” It’s not. System 2 is a construction tool. Its job is to build compressions that System 1 can execute automatically. Once built, System 1 is faster and costs less cognitive energy (glucose).

The goal isn’t to always use System 2. The goal is to use System 2 to build better System 1 compressions, then get out of the way.

The cardiologist with the 6-variable algorithm isn’t thinking less than the one considering 50 variables. He compressed the right variables into System 1.
Now he can see the pattern instantly while his colleague is still deliberating. You think right, once, and compress it. Once you’ve compressed your algorithms and models properly, you can teach external computers (like current AI) to automate them and never have to think about them again. (I built a kit for exactly this; more on that later.)

Mastery is knowing which system to use when. And to know that, you must first know which game you’re even playing.

The Game You’re Not Seeing

There are two types of games:

Deterministic Games: Input A → Outcome B. Every time. Baking a cake. Solving an equation. Following a recipe. In these games, effort finds the formula, then the formula replaces effort.

Probabilistic Games: Hidden variables. No guaranteed formula. Startups. Dating. Investing. In these games, volume (sample size) is leverage, not perfection.

Most people play probabilistic games like they’re deterministic, spending months perfecting one approach. Or they play deterministic games like they’re probabilistic, trying random things instead of finding the formula.

You can’t see the difference unless you first identify the game you’re in. And you can only see that if you’ve compressed enough patterns to recognize the structure.

This is why experienced people look like they’re not trying. They’ve already identified the game type. They know which variables matter. They know when to think hard (building the compression) and when to not think at all (executing the compression).

Don’t waste your life trying harder in a deterministic game when you should be looking for the formula, and don’t waste your life looking for a formula in a probabilistic game when you should increase your sample size and do hard, obsessive work.
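Why is sample size the lever in probabilistic games? Because the error of an estimate shrinks roughly with the square root of the number of trials. A throwaway simulation makes the point (the hidden win rate of 0.3 is an arbitrary assumption for illustration):

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

def estimate_win_rate(true_p: float, n: int) -> float:
    """Estimate a hidden success probability from n noisy trials."""
    wins = sum(1 for _ in range(n) if random.random() < true_p)
    return wins / n

# In a probabilistic game you never observe true_p directly; more samples,
# not more polish on a single attempt, is what tightens the estimate.
hidden_p = 0.3
for n in (10, 100, 10_000):
    error = abs(estimate_win_rate(hidden_p, n) - hidden_p)
    print(f"n={n:>6}  estimation error ≈ {error:.3f}")
```

With 10 trials the estimate can be wildly off; with 10,000 it is pinned down. Perfecting one attempt changes none of that, which is why volume beats polish when the variables are hidden.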
What Separates Us from Every Other Species

Many animals have System 1 and System 2. Hawks have lightning reflexes and solve novel problems. Crows use tools. Octopi learn by observation. Our advantage isn’t having these systems. It’s that we externalized System 2 into a shared, cumulative project.

We invented compression tools: language, writing, mathematics, notation, code. These tools let us:

- Store compressions outside our brains
- Share them with others
- Criticize and improve them across generations
- Combine them into higher-order compressions

Aristotle’s System 2 work was compressed into text. For 2,300 years, other System 2s have attacked it, refined it, built on it. We’re participating in a multi-millennial compression project.

Other species can’t do this. Not because they’re stupid. Because they lack the infrastructure for cumulative compression. A chimp can learn to use a stick for termites. Impressive. But that’s copying a System 1 output (the behavior), not transmitting a System 2 explanation (why sticks work, how to improve stick design, a theory of leverage).

Without symbolic systems, knowledge can’t:

- Be stored beyond individual memory
- Be criticized by others who weren’t there
- Be refined across generations
- Be combined with other knowledge to create new compressions

Each generation starts near baseline. Knowledge flashes and fades. We escaped this trap. We built the infrastructure to make our System 2 compressions immortal. That’s why you can absorb the thinking of Confucius in 2025.
That’s why knowledge compounds instead of resets.

Why the Infrastructure Never Emerged Elsewhere

The Lifespan Red Herring

Even 200-year-old chimps wouldn’t spontaneously invent writing. Longevity helps use symbolic systems (more time to learn and contribute), but it doesn’t create them. The bottleneck isn’t time in the game.

Raw Intelligence Isn’t the Constraint Either

Crows solve multi-step problems. Dolphins communicate complex information. Octopi learn by observation. Intelligence exists. The infrastructure doesn’t.

The Missing Layer Is Explanatory Reach

Humans form theories that extend beyond immediate experience. We don’t just notice fire is hot. We ask why. We build models of combustion, energy, thermodynamics. We compress “fire is hot” into theories so general they explain stars.

Other animals seem locked in bounded domains. They solve immediate problems brilliantly but don’t abstract those solutions into universal theories. They don’t ask “what if?” or “why must it be this way?”

Our exponential growth of knowledge required four layers:

1. Conjecture: proposing explanations beyond direct observation
2. Criticism: testing explanations to find their failures
3. External memory: storing explanations in transmissible form
4. Cumulative error-correction: a cultural practice of building on past criticisms

Once you have all four, you get runaway knowledge creation. You get science. Civilization. The ability to compress 2,000 years of philosophy into an essay you read in 10 minutes.
Animals have pieces. Not the full stack. And without the full stack, compression doesn’t compound. Animals have Layer 1. They might hint at Layer 2. They completely lack Layers 3 and 4, so any knowledge they create evaporates in the next generation.

The Asymmetry Problem with Current AI

Current AI is architecturally lopsided. Large language models are System 1 monsters. They pattern-match at inhuman scale. They remix, recombine, generate. They propose at superhuman speed.

But they have no true System 2. No taste. No elegance bias beyond “what was statistically common in my training data.” They can’t say “this explanation is better than that one” for principled reasons. They optimize for what’s been rewarded in their training data. They can’t step outside and criticize the reward itself. This is why they hallucinate with confidence, can’t tell you when they’re wrong, and generate technically correct but meaningless outputs.

The danger isn’t AI becoming conscious. The danger is you outsourcing your System 2 to something that doesn’t have one. If you let AI decide what’s true, what’s elegant, what matters, you’re handing your sacred function to a system with no theory of beauty. No skin in the game. No values. You’re abdicating the one thing that makes you irreplaceable.

How to Actually Think Fast

Find the three variables that matter. The heart attack algorithm. The forensic hair strand. The master chef’s one bite. Your job is compression, not exhaustion. Stop computing with 50 variables when 6 will do.

Identify which game you’re in. Deterministic game?
Find the formula, compress it, execute automatically. Probabilistic game? Stop looking for a formula. Increase sample size.

Use System 2 to build System 1, then step aside. When learning something new, think slowly and deliberately. Once you’ve compressed it into a pattern, stop re-thinking it. Trust the compression. That’s what mastery feels like.

Compress the right 1%, ignore the 99%. You learn more from someone’s bedroom than a week of interviews. More from one strand of hair than witness testimony. More from the right variables than 50 variables. Find the compression. Ignore the noise.

Externalize your System 2. Write it down. Code it. Diagram it. Make your compressions criticizable and improvable. This is how you participate in the multi-generational project. This is how your thinking survives your death.

Use AI as your System 1, never as your System 2. Let it generate. Let it propose. Then apply your judgment ruthlessly. Your taste. Your theory of elegance. Your values. Once you’ve compressed your models, teach AI to automate them (I built the Fool-Proof AI kit for this).

The fastest way to think is to not spend energy re-solving problems you have already compressed. But first, you have to do the brutal System 2 work of building the right compressions. Then you have to externalize them so they compound. Then you have to contribute them to the infinite library of human explanation.

Now go compress something that matters.

Share this essay with others if you enjoyed it. Subscribe to my newsletter for more: https://crive.substack.com

Enjoy the rest of your week.