Apocalypse of the Gaps

by Jesse Wood, August 12th, 2024

Too Long; Didn't Read

AI doomerism is a religion, blending millenarianism and apocalypticism, that fills the god-sized gap in AI researchers. Only they can save the world from impending doom. It places them at the center of the universe and provides a holy sense of purpose. Another gap it fills is the explainability crisis of deep learning.

AI doomerism is a religion, blending millenarianism and apocalypticism, built to fill the god-sized gap in AI researchers.

Many experts in the field believe it, but that doesn't make it true. A gnostic scientist can make astonishing scientific discoveries without ever proving their God real.


It gives them a messiah complex. Only they can save the world from impending doom. It places them at the center of the universe and provides a holy sense of purpose, a main quest where the fate of humanity hangs in the balance.

It has its holy scriptures: LessWrong, the orthogonality thesis, Superintelligence. It has its prophets: Eliezer Yudkowsky and Nick Bostrom. Anyone who questions its dogma or narratives, or asks for concrete examples or empirical evidence, is cast as a heretic, evil, or stupid for ignoring the existential threat of AGI or x-risk. "You're not engaging in object-level arguments," they parrot to dismiss any valid criticism, then refer you to their bible, a LessWrong post.


Another gap it fills is the explainability crisis of deep learning. Neural networks are an inscrutable series of matrix multiplications on floating points. We will never fully understand them, so we can't trust them. In the past, God filled the gaps in our knowledge left by uncertainty and speculation; epilepsy, for instance, was called the sacred disease. Now, apocalyptic sci-fi fills the gap of explainability in deep learning.
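
To make the point concrete, here is a minimal sketch (Python with NumPy; the shapes, weights, and input are arbitrary illustrations, not any real model) of a two-layer forward pass. The entire "decision" is two matrix multiplications and a ReLU on floating-point numbers; nothing in the output explains itself.

```python
import numpy as np

# Toy two-layer network: illustrative only, arbitrary shapes and weights.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8))  # layer 1 weights
W2 = rng.standard_normal((8, 2))  # layer 2 weights

def forward(x):
    h = np.maximum(x @ W1, 0.0)   # matrix multiply + ReLU
    return h @ W2                 # matrix multiply

x = rng.standard_normal(4)        # some input
print(forward(x))                 # two opaque floats, no explanation attached
```

Scale those two matrices up to billions of parameters and you have a modern model: the arithmetic is simple, but the "why" behind any output is nowhere to be found in it.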


Apocalypse of the Gaps!


We don't fully understand the workings of the human brain; it remains a mystery. Yet we have developed systems of governance and technology that let us trust other humans with our money, health, taxes, laws, data, and sensitive information.

We allow humans to wield nuclear warheads, biological weapons, drones, and armies. Yet an AGI with less agency than we grant our government, less agency even than a fruit fly (it is not embodied, cannot replicate, and relies on external power), is supposedly what will end the world. We trust humans with WMDs over a sandboxed AGI without them.


There exist incredibly intelligent individuals who pose no immediate existential threat to humanity. There exist far less intelligent individuals who pose a greater one, e.g., Kim Jong Un in North Korea: a real-world, concrete, credible existential threat.


Intelligence ≠ Agency


Yet leading AI safety advocates suggest pre-emptive nuclear strikes on Russia to keep it from building AGI, all over a theoretical x-risk. They would risk imminent mutually assured destruction, the end of civilization, to avert a hypothetical future doomsday.

"America, Russia and half of Europe are now a desert of nuclear glass, but at least Skynet never came to be", the AI doomer pats himself on the back, after another hard day's work of unabashed speculation and prophesizing.

Drone strikes on data centers, pre-emptive nuclear strikes, compute governance: every authoritarian movement has ushered in totalitarian control under the guise of public safety. This is no different.