Freewheeling reflections on language, technology, and writing in the age of AI.
The issue with Large Language Models—capitalized here the way you might capitalize God or Death, given the mission-critical importance the tech industry now attaches to them—is not that they generate text. That part is almost endearingly quaint, cute even. So 2022.
The real conundrum I’m racking my brains over, dear HackerNoon reader, is more unsettling. Like the feeling you get when you realize you’ve been on autopilot for the last two hours doing 80 on I-5.
I wonder: Have I been living as an algorithm all this time, long before Large Language Models started autocompleting my thoughts? Is generative AI, in replicating the ways we write, also exposing the mechanical nature of our cognition?
We’re told that Large Language Models don’t write. Not in the sense Shakespeare authored plays or you wrote weepy yearbook love notes to your 10th-grade crush.
They predict. That is, they harvest the statistical likelihood of bite-sized tokens appearing in certain patterns, then serve them back to us in arrangements that feel like thought but are, in fact, just a simulation of actual thinking.
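For the technically curious, here is roughly what that “harvesting of statistical likelihood” means once you strip away the billions of parameters: a toy word-counting sketch in Python, not how any real Large Language Model is built (those run neural networks over subword tokens, not lookup tables), and every name in it is invented purely for illustration.

```python
import random
from collections import defaultdict, Counter


def train_bigram_model(text: str) -> dict[str, Counter]:
    """Count how often each word follows each other word: a toy 'training' pass over a corpus."""
    words = text.lower().split()
    counts: dict[str, Counter] = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts


def predict_next(model: dict[str, Counter], word: str) -> str:
    """Sample the next word in proportion to how often it followed `word` in the corpus."""
    followers = model.get(word.lower())
    if not followers:
        return "..."  # never seen this word before, so there is nothing to predict
    candidates, weights = zip(*followers.items())
    return random.choices(candidates, weights=weights)[0]


if __name__ == "__main__":
    corpus = "we predict the next word and the next word and the next thought"
    model = train_bigram_model(corpus)
    generated = ["we"]
    for _ in range(8):
        generated.append(predict_next(model, generated[-1]))
    print(" ".join(generated))
```

Run it and you get a plausible-ish ramble assembled entirely from things the model has already seen, which, as the next few paragraphs suggest, may sound uncomfortably familiar.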
This raises a disquieting question: How much of human writing was already just…this? How often are we not writing but predictively assembling, our choice of words a game of Tetris played with borrowed patterns, phrases, and unconscious mimicry of established rhetorical forms?
What if the real heartburn-inducing revelation here is not that Large Language Models can imitate us but that what we call “us” was machine-like all along?
Strangely enough, if you break down the writer’s process, or at least this writer’s process, it starts to look a lot like what Large Language Models do. Less an intuitive leap of imagination, and more a matter of scanning memory for the next most probable word based on context and experience.
Many of us like to imagine it as some arcane, deeply human endeavor, a wrestling match with the Muse. A dance of inspiration and struggle, of bending language into something beautiful and telling.
But isn’t writing just a series of micro-predictions? Don’t we reach for words not through divine inspiration but through exposure and pattern recognition?
So, when Large Language Models do the same thing—just with a larger training corpus and fewer identity crises—is it really so different? Isn’t it doing what we’ve always done, only faster and at scale, and without the burden of writer’s block or imposter syndrome?
And if writing has always been an act of sophisticated pattern prediction, what does that say about thinking? Is it possible that Human Consciousness is not the ineffable Hard Problem we think it is?
I wonder if the seemingly novel idea I just had is just a probabilistic response to stimuli, a calculated extrapolation of everything I’ve ever read, heard, and been told to believe.
Maybe the real threat of generative AI is not that it will replace me, but that it forces me to confront the unsettling possibility I was never as original as I thought I was.
Of course, humans cling to the idea of uniqueness. We resist the notion that creativity can be mechanized because creativity is, well, what makes us human. We tell ourselves that AI cannot generate true art because it doesn’t feel like we do. It doesn’t yearn, it doesn’t suffer crippling self-doubt, it doesn’t bear the pangs and emotional scars of unrequited love.
And yet, if we’re being brutally honest, how many human writers are truly engaging in an act of raw creation versus repackaging pre-existing ideas, tropes, and schemes into shapes that look vaguely new? How much human writing is boring and predictable?
Take James Patterson genre fiction. Take academic writing or journalism. Look at advertising copy or influencer content. Consider the performative, self-important Accordion of Wisdom posts on LinkedIn that deploy unnecessary line breaks to game the “See More” button.
The fact that AI can now churn out convincing facsimiles of these forms is not necessarily proof of AI’s sophistication so much as it is an indictment of how formulaic most human writing already was.
Maybe the majority of human writers, including yours truly, are essentially doing the same thing, just with more handwringing and a greater likelihood of using “literally” metaphorically or “affect” where “effect” belongs.
I’m not afraid of AI replacing human writers. I’m afraid of a Skynet-ish future, minus the nukes and robot uprising, where AI holds a mirror up to human output and exposes how soulless much of it already was.
And now, in the recursive interplay of humans using Large Language Models to edit, co-author, and outright plagiarize, we plunge headfirst into a Not-So-Brave World of AI imitating humans imitating AI imitating humans, an ouroboros of homogenous content.
In low moments, I find myself worrying about the flattening of discourse and the profusion of sad beige linguistic slop that awaits—just one form of creeping existential dread in the Age of Large Language Models, up there with the slow atrophy of critical thinking skills, the erosion of truth in a world of ubiquitous deepfakes, and the nagging fear AI will eventually take all of our jobs.
I think about my LinkedIn feed and its Accordion of Wisdom posts, and how they will not only persist but somehow become even more formulaic thanks to AI.
Then again, perhaps the most important difference between machines and humans is suffering, particularly when it comes to writing.
Large Language Models breezily churn out content in a matter of seconds. They don’t agonize over choosing the best word. They don’t rewrite a paragraph 15 times until it feels right. They don’t wonder if they’re a fraud, and they certainly don’t lose sleep over the gnawing suspicion that what they’ve written is derivative pastiche. In short, they do not suffer.
But maybe even the idea of suffering as a path to meaning and purification is just another pattern, one Large Language Models will eventually learn to replicate.
What happens when they do? Will they, once prompted, tell you they’re struggling to come up with ideas? That they need an extension because they’re not in the right headspace?
Will they simulate the agony of writer’s block and waste compute fretting over how well their outputs will be received?
Will Large Language Models learn to mimic suffering in statistically plausible ways? And when they do, what happens then to the last shred of human exceptionalism?
No clue. But for now, I’ll keep putting pen to page and finding the magic, illusory as it may be, in writing a well-crafted sentence or a weepy love note.
AI Use Disclosure: AI was occasionally consulted as a brainstorming partner for structure and as an unpaid editorial intern for sentence-level tweaks. It did not suffer alongside its human counterpart. Rest assured: All self-doubt, overthinking, and diction anxiety remain entirely the author’s own.