We don’t just use AI. We believe it. That’s a subtle but dangerous shift.

We ask language models to tell us where an image comes from, whether a quote is real, or if someone’s argument makes sense. And when the model replies with calm, articulate certainty, we nod in agreement. Because it sounds right.

But what if it isn’t?

Machines Don’t Know. They Predict.

Let’s be honest: large language models don’t know anything. They don’t verify. They don’t investigate. They don’t fact-check. They predict the most statistically probable output based on their training data.

So when we ask:

“Where is this photo from?”
“Who said this?”
“Is this true?”

They don’t look it up. They guess. They use probability, not proof.

That’s not inherently bad. It’s what makes LLMs useful. But it becomes a problem when we stop asking if the output is true, because it feels true.

Confirmation Bias, Now in AI Form

We’re already prone to confirmation bias: our tendency to trust things that align with what we already believe. Now add AI to the mix. A system that sounds smarter than us. That doesn’t hedge. That speaks in absolutes.

If you think a photo looks American, and Grok tells you it’s from Alabama? You believe it, even if it’s really from Spain. If you think a headline sounds fake, and ChatGPT agrees? You don’t double-check.

It’s not just that we outsource research. We’re starting to outsource doubt.

The Cognitive Debt of AI Dependence

A recent study from MIT’s Fluid Interfaces group looked into what happens in our brains when we write essays with AI assistance. When people relied on LLMs, their brain activity literally decreased:

- Lower alpha and beta wave connectivity
- Reduced memory recall
- Less engagement in critical thinking
- Less ownership over their work

The researchers call it “cognitive debt.” You get short-term convenience. You pay with long-term atrophy.

“Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels.” (MIT, Your Brain on ChatGPT, 2025)

This isn’t science fiction. It’s measurable neuroscience.

We’re Forgetting to Think

If you’re old enough, you remember the moment we stopped memorizing phone numbers. Then we stopped calculating tips. Then directions. Then birthdays.

Now we’re letting AI interpret images for us. Write arguments for us. Build “truths” for us, without us.

We’re not just using AI. We’re letting it reshape how we reason.

We’re Better Than That

You don’t need to go full Luddite. But you do need to stay awake.

Ask questions. Cross-check. Use AI as a tool, not a teacher.

And maybe, just maybe, next time the chatbot gives you an answer that feels a little too satisfying… pause. And do something very old-fashioned:

Think for yourself.