Who are all those people on social media?
The people who randomly like, share, and comment on posts, and who harass or praise other people and events.
The anonymous users who influence public opinion, dictate the social discourse, and even have the power to decide the outcome of elections.
They could have Ph.D.s or be children. Their personas could even be entirely fabricated by AI. The point is, they are unidentifiable.
Here is a spooky thought experiment for you.
Imagine that you are texting back and forth with someone or reading an article online. Naturally, you'd assume there is a human of flesh and blood at the other end.
In reality, however, the messages or content you receive are generated entirely by an AI, just like the fake profile photo. How would you know?
To take the thought experiment one step further: How can you prove that friends you have in your network on social media are really who they appear to be?
When you meet them, their faces serve as proof of identity. But in the online space, there is hardly any direct way for users to verify whether a profile is in fact linked to the person it represents.
Even Instagram stories, TikTok videos, or the most heartfelt Facebook posts can in theory be deepfaked. Essentially, information online cannot be trusted.
Sure, right now this dilemma is more philosophical than pragmatic.
However, technologies that can distort reality in online spaces are getting stronger by the day. Just imagine what they can do in 10 or 50 years.
Without a bulletproof method for users to confirm people's identities, the internet may run into a situation that resonates with the "dead internet theory": an underground conspiracy theory which proposes that the internet is "fake", basically a gaslighting project almost entirely controlled by AI.[1]
Bizarre as it sounds, I do believe there is a disturbing ring of truth to it. In this post, I will go over what the dead internet theory is, and how a platform-agnostic identity system built on the "trustless" web 3.0 could resolve the current problem of digital identities.
"Dead internet theory" broke into mainstream media when a widely read post on the forum Agora Road's Macintosh Cafe caught the attention of The Atlantic writer Kaitlyn Tiffany.
The post was written by a forum member called "IlluminatiPirate" who boasts of being an "oldfag" (meaning an old-time member) of imageboards such as 4chan and of having "seen it all". In the midst of his seemingly paranoid ramblings, there are some nuggets of truth that many people can relate to.
The pseudonymous forum member compiled evidence for the theory, which, according to him, was originally formulated by anonymous users on 4chan's paranormal board and on Wizchan. The latter is an imageboard dedicated to male virgins past the age of 30, or "wizards" as they are called in the community.
As Tiffany aptly wrote, the dead internet theory is "patently ridiculous", yet it "feels true".[2]
The TL;DR version (that is, the abstract) of the theory is perhaps a testament to that:[3]
“Large proportions of the supposedly human-produced content on the internet are actually generated by artificial intelligence networks in conjunction with paid secret media influencers in order to manufacture consumers for an increasing range of newly-normalised cultural products.”
Or when IlluminatiPirate describes a strange hunch he has that something is very wrong with the internet:[4]
“The Internet feels empty and devoid of people. It is also devoid of content. Compared to the Internet of say 2007 (and beyond) the Internet of today is entirely sterile.
There is nowhere to go and nothing to do, see, read or experience anymore. It all imploded into a handful of normalfag[5] sites and these empty husks we inhabit. Yes, the Internet may seem gigantic, but it’s like a hot air balloon with nothing inside.”
The main premise of the theory is, I believe, somewhat accurate. A 2021 report shows that bots make up almost two-thirds of all internet traffic.[6]
Additionally, social media algorithms curate and prioritize the content they show to users on the principle of profit maximization. In other words, social media algorithms and search engines push content that entices users to buy more products and services, and neglect content that is controversial.
On top of that, the unfathomable amount of data processed on the internet every minute makes it all blend together. The human mind was never designed to take in, let alone sort through, such quantities of information.
In fluent imageboard slang, the author goes on to describe why the dead internet theory is real and why the internet isn't.
Some of his points are hard to follow, passages are incoherent, and at times he is clearly delusional. However, some of the arguments coincide with my own research and personal beliefs.
IlluminatiPirate points to the superficial and impersonal nature of online interactions:
“I used to be in perpetual contact with a solid number of people across multiple sites. Across the years each and every one of them vanished without a trace.”
He points out how the same content keeps reappearing over and over again:
“I’ve seen the same threads, the same pics and the same replies reposted over and over across the years to the point of me seeing it as unremarkable.”
I also share his belief that truly original content is suffering because large players in the entertainment industry rely on algorithms and big data analysis of consumer habits to “feed the customers”. That is often the feeling I have when I watch a new movie from Netflix Originals:
“Algorithm fiction. Do you like capeshit, Anon? How about other Hollywood stuff? Music perhaps? Have you noticed how sterile fiction has become?
How it caters to the lowest common denominator and follows the same template over and over again? How music is just autotunes and basic blandness? The writer’s strike never ended.
Algorithms and computer programs are manufacturing modern fiction. No human being is behind these things. This is why anime looms so large — even a simple moe anime has heart because there’s actual people behind it, and we all intuitively feel this.”
IlluminatiPirate also points to the dangers of deepfakes. As implied at the beginning of this post, the technology could potentially take off within the next couple of years:
“Fake people. No, not NPC’s. Youtube people who talk about this or that, and quite possibly many politicians, actors and so forth may not actually exist.”
Social media algorithms and search engines alike are known to customize news feeds and search results so they fit the individual user's likes and preferences. This customization results in so-called "epistemic bubbles" where users are constantly reaffirmed in their own beliefs and opinions.
Everything the users see confirms what they already know, and the people they interact with tend to wholeheartedly agree with them. People are thus exposed to a very one-sided view of the world, which creates a potentially dangerous "us vs. them", "they are wrong, we are right" dynamic in society.
“The internet is a fast way to get info, and info is what moves the mind, and the thing is, the mind likes recognition. When the “likes” were introduced without negative feedback they created a copy-feedback subconscious, they made it so only “positive” opinions be propagated (also accepted), and in it’s way negative opinions to be obsolete.
Now everyone is too cowardly to have an opinion so they copy others they like, they are more likely to follow trends and say what others said, you can also see it with the paranoia of always wanting to listen to experts.”
Finally, I think one of the most interesting points to be drawn from the compilation post on dead internet theory is the distinction between "the old web" and "the new web". Cut to the bone, IlluminatiPirate's basic point is that he misses the old days of the internet and dreads what the internet has become. The same notion is shared even by the inventor of the World Wide Web, Tim Berners-Lee, who says that the system is failing.[7]
“Creation of original content is how the internet used to work. Anonymous people were willing to express their opinions and try radical or experimental things. More truly original content, uninfluenced by bots or paid influencers, was created due to anonymity as protection against negative feedback. On the old internet, you could start anew every time you posted something.”
There is indeed a conventionally accepted distinction between an "old" internet (web 1.0) and a new internet (web 2.0). The first version of the internet was read-only, while the second version allows us to upload our everyday lives to various platforms.[8] The problem with web 2.0 is that four companies control 67% of the world's cloud infrastructure.[9] The large tech monopolies claim ownership of our data, and indeed of our digital identities.
With all this in mind — how can we prove that the internet is not fake? Or to turn the question around: how can we make it real?
Satoshi Nakamoto, the anonymous creator of Bitcoin, had an extraordinarily deep understanding of human nature. He understood that transactions between humans are based on trust.
Within reason, trusting others is usually considered a positive trait. However, human beings are fallible. That is why Satoshi Nakamoto designed Bitcoin as a monetary system based on cryptographic proof instead of trust.[10]
The transactions and interactions we have with other people on the web should be based on Bitcoin’s principle of “trustlessness”. What we need is a reliable and transparent method to link people’s digital avatars to the people they are supposed to represent in real life.
That would mitigate a wide array of issues, such as fake spam accounts, the deliberate spread of false information, deceptive media manipulation, fraud, catfishing, cyberbullying, and trolling, because people could be held directly accountable for their actions in the online space.
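To make the idea of "trustlessness" concrete, here is a minimal sketch of one possible challenge-response scheme: a profile publishes a public key, and its owner proves control by signing a random challenge that anyone can verify. This is only an illustration of the cryptographic principle, written in Python with the cryptography package; it is not a description of how any existing platform or identity project actually works.

```python
# pip install cryptography
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The profile owner generates a keypair once and publishes the public key on their profile.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Anyone who wants to check the profile issues a fresh random challenge.
challenge = os.urandom(32)

# Only the holder of the private key can produce a valid signature over that challenge.
signature = private_key.sign(challenge)

# Verification needs nothing but the published public key: no platform, no intermediary.
try:
    public_key.verify(signature, challenge)
    print("Proof accepted: this profile is controlled by the key holder.")
except InvalidSignature:
    print("Proof rejected: the responder does not control this identity.")
```

The design choice that matters here is that verification relies only on mathematics and a published key, not on a platform vouching for the account.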
Importantly, online identity systems have to be (a) open and (b) decentralized. The contemporary solution where you upload your personal ID to the service provider for verification does not fulfill either of those criteria.
If the identity system is closed and centralized, the system owner would be able to track and monitor people's behavior, much as China's social credit system does. Therefore, online identity systems have to be built on the infrastructure of web 3.0.
The key characteristic of web 3.0 is that users are in charge of their data and digital identity. Plenty of companies are already working towards this shift by building platform-agnostic identity solutions for the web that can be scaled.
Notably, Microsoft recognizes the importance of being able to prove your online identity in a secure way while keeping your credentials and personal information private.[11] The company currently offers a decentralized identity system for organizations called Azure AD Verifiable Credentials.
Other players working on decentralized, blockchain-based identity systems include Ontology, XSL Labs, Cheqd, Sovrin, and the Decentralized Identity Foundation.
When digital identity solutions such as these become widely used, signing in with a username and password, or with a social login via Google, Facebook, or Twitter, will be ancient history. Instead, you will own a unique digital passport that can be verified by a peer-to-peer network, in place of an intermediary, every time you register for a new online service.
You can also decide which pieces of data to make available, to whom, and under what conditions.[12] For example, if an application is age-restricted, you can prove with your digital passport that you are above a certain age without providing any further information such as your birthday. Or if an agency is interested in using your consumer preferences for commercial or statistical purposes, you can set a price for giving them access to your data.[13]
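As a toy illustration of that kind of selective disclosure, imagine an issuer that signs each attribute of your passport as a separate claim, so you can present just the "over 18" claim and keep your birthday private. Production verifiable-credential systems use more sophisticated machinery (such as zero-knowledge proofs), and the names below (issue_claim, did:example:alice) are hypothetical, so read this Python snippet purely as a conceptual sketch.

```python
# pip install cryptography
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A trusted issuer (say, a public authority) signs every attribute as a separate claim.
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()

def issue_claim(subject: str, name: str, value) -> dict:
    claim = {"subject": subject, "claim": name, "value": value}
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": issuer_key.sign(payload)}

# The holder's "digital passport" is a collection of independently signed claims.
passport = [
    issue_claim("did:example:alice", "over_18", True),
    issue_claim("did:example:alice", "birthday", "1990-04-01"),
]

# For an age-restricted service, the holder presents only the over_18 claim.
presentation = next(c for c in passport if c["claim"]["claim"] == "over_18")

# The verifier checks the issuer's signature on that single claim; the birthday is never revealed.
payload = json.dumps(presentation["claim"], sort_keys=True).encode()
issuer_public_key.verify(presentation["signature"], payload)  # raises InvalidSignature if tampered with
print("Verified claim:", presentation["claim"])
```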
That is only fair. Everything we do online leaves a digital footprint. These footprints are picked up by monopolistic technology companies and used as lubrication for their algorithms and AI, even though they have no rightful claim to extract these subtle pieces of information from us for free.
If nothing changes, the dystopian vision of dead internet theory could eventually become true.
The internet may become a twilight zone controlled by powerful, proprietary AI that continues to make great profits from our personalities, beliefs, and human qualities.
Luckily, the explosion in blockchain innovation and the hype surrounding web 3.0 are raising awareness of the digital identity problem and the current lack of data ownership for internet users.
[1] Kaitlyn Tiffany (Aug 31, 2021), "Maybe You Missed It, but the Internet 'Died' Five Years Ago", The Atlantic -> https://www.theatlantic.com/technology/archive/2021/08/dead-internet-theory-wrong-but-feels-true/619937/ (13–02–2022).
[2] Ibid.
[3] https://forum.agoraroad.com/index.php?threads/dead-internet-theory-most-of-the-internet-is-fake.3011/ (18–02–2022).
[4] Ibid.
[5] Normalfag: imageboard slang for an average internet user.
[6] Barracuda (Sep 2021), Bot Attacks: Top Threats and Trends -> https://assets.barracuda.com/assets/docs/dms/Bot_Attacks_report_vol1_EN.pdf
[7] https://www.theguardian.com/technology/2017/nov/15/tim-berners-lee-world-wide-web-net-neutrality (12–02–2021).
[8] Emily Nicolle (12–02–2022), Andreessen’s Dixon Spies Riches in Web3. Others See ‘Rubbish’ -> https://www.bloomberg.com/news/articles/2022-02-12/andreessen-horowitz-a16z-s-chris-dixon-sees-web3-riches-others-see-rubbish?srnd=cryptocurrencies.
[9] https://uk.pcmag.com/old-cloud-infrastructure/131713/four-companies-control-67-of-the-worlds-cloud-infrastructure (12–02–2022).
[10] See https://i.redd.it/njml2ng9j7h81.png (18–02–2022).
[11] Quote from Azure Friday (16 August 2019), "An introduction to decentralized identities" -> https://docs.microsoft.com/da-dk/shows/Azure-Friday/An-introduction-to-decentralized-identities
[12] George Zarkadakis, "The Internet Is Dead: Long Live the Internet", p. 49, in Werthner et al. (eds.), Perspectives on Digital Humanism (2022), Springer.
[13] Ibid.
Co-published here.