
Why AI Fails Can Be More Important Than Its Successes

by Adrien Book, September 30th, 2024

For the past few years, we’ve heard countless voices (yours truly included) share their worries about AI and the future it will bring about. “It will take all the jobs”. “It will surveil us”. “It will kill us”. Alarming revelations of AI’s dystopian pitfalls, from biased algorithms to search-engine misinformation to celebrity NSFW deep fakes, are now as ubiquitous as Twitter guys writing awful threads about prompt engineering a drop-shipping business (or something).


These discussions are fun, but not particularly productive. Or realistic, for that matter. What is happening with AI today has happened before.


Look at Excel. The green devil that haunts my nightmares did not render accountants obsolete. It made them better and more productive. It also created a new class of capitalists dedicated to corporate optimization rather than bookkeeping, ushering in a world of untold wealth (and inequality).


Midjourney and ChatGPT are fun little toys, but, just like Excel before them, their end purpose is to boost companies’ bottom lines. Write emails faster. Take meeting minutes. Make dynamic pricing recommendations. Nothing more, despite their makers’ messianic promises.

Selling god has always been a smart commercial decision, hence Internet giants’ promotion of hyperbolic narratives (we also worship tech billionaires, but that’s another story). Inflating the AI hype into something bigger than it is makes it seem inevitable. And one may as well buy into the inevitable.


But we are not getting a god: we are getting productivity tools.


This is why it is key that we witness how algorithms “fail” today. First, because we need to remember that our technocrat overlords are fallible, often in hilarious ways. They will want us to forget “eat at least one rock a day” (Google). “Your spouse and you don’t love each other” (Microsoft). The fake court citations (OpenAI). That lying Air Canada chatbot. The list goes on. We should not let them.

Every AI failure is a reminder that Big Tech very much does not have its sh*t together, and that may be something worth celebrating as these companies continue their seemingly endless march towards tech dominance (“tiny acts of rebellion”, or something).



The other reason is that… it’s simply fun to see AIs fail. It’s a fascinating (dare I say artistic) glimpse into what could be if companies weren’t so damn boring. It even feels a little forbidden, which adds to the fun.


We are actually in a unique age where we’re seeing things that will seem like oddities in the future. Eyeballs, stretched fingers, strange expressions, mouths laden with extra teeth… but also “jailbroken” models, companion AIs, somewhat rebellious Welsh chatbots… The nature of creativity is messy, and we’re seeing that messiness on full display today.



The “better” AI gets in the eyes of its makers, the blander its output will become, and the more obvious and unacceptable its errors. We should enjoy AI’s strange quirks and failures before companies erase and sanitize the space: we’ve already seen “peak AI”.


Firstly, because tech companies are failing to deliver long-promised returns after sinking billions of dollars into fancy email generators. Reality is back on the menu, the bubble is deflating, and the money spigot is about to be shut off. The industry is likely to shrink to more niche applications as enthusiasm wanes. In a few years, all we’ll have is an extra-fancy Excel/Outlook/calculator app. And what little joy we could glean from Big Tech stepping on rakes it set up for itself will be gone.


Secondly, because I believe that AI means more to us when it fails than when it succeeds. AI failures expose the technology’s limitations, reveal risks, and provide crucial insights for improvement. By highlighting what doesn’t work, failures drive innovation, encourage more responsible development, and help set realistic expectations for AI’s capabilities. Moreover… and I cannot stress this enough: it’s fun! And we need fun, now more than ever.


The world is very big, and we are very small. Good luck out there.