"High tech runs three times faster than normal businesses, and the government runs three times slower than normal businesses. So we have a nine-times gap."
- Former CEO of Intel, Andrew Grove
“Some of us continue to have old-fashioned careers in the twenty-first century— we are doctors, professors, lawyers, and truck drivers. Yet the main economy is now driven not by what we do, but by the information extracted from us, not by our labor in any established sense, but by our data.”
-Justin E. H. Smith in The Internet Is Not What You Think It Is: A History, a Philosophy, a Warning (2022)
“In the past, humans had to struggle against exploitation. In the 21st century, the really big struggle will be against irrelevance. It is much worse to be irrelevant than to be exploited. Those who fail in the struggle against irrelevance will constitute a new useless class.”
-Yuval Noah Harari in a talk at World Economic Forum, Davos (2020)
“Tackling these problems will require a combination of clear-eyed analysis and profound philosophical examination of what matters in our lives, a task for both our minds and our hearts”
- Kai-Fu Lee in AI Superpowers: China, Silicon Valley, and the New World Order (2018)
“If we believe that life has a meaning beyond this material rat race, then AI just might be the tool that can help us uncover that deeper meaning.”
-Kai-Fu Lee in AI Superpowers: China, Silicon Valley, and the New World Order (2018)
The quotes above have been playing on a loop in my head, like inner voices, as I kick off this post, which has been in the making for a while.
In fact, this is not a single post, but the first one in a series of posts that I plan to write over a longer time frame. I am interested in exploring how the AI revolution, and the Web 3.0 paradigm, including the peculiar Metaverse concept, will shape the human experience in the future.
Without going too deep down the rabbit hole, and without further ado, here is a brief overview of the challenges that I plan to write about.
Yuval Noah Harari's talk at the World Economic Forum in Davos, 2020 is an excellent jumping-off point to convey the macro threat of advanced AI systems.
Harari identifies three challenges that will come from the AI revolution: a new useless class, data colonialism, and digital dictatorships.
The new useless class is intertwined with the concept of technological unemployment, which I will get into later. Briefly explained, “the useless class” is an envisioned new caste of workers who fall behind technological development as they watch their hard-earned skills being replaced by automation.
Data colonialism relates to “technological inequality”: for example, the divide between developing countries that are largely “off the digital grid” and the AI superpowers, the US and China. Automation in highly developed countries can replace low-wage labor in developing countries. Moreover, data is the oil of the 21st century. Whoever holds the data can control and exploit people through knowledge of how they behave, act, think, and feel. As Harari says in his talk, “when you have enough data, you don't need soldiers to invade a country.”
Harari sums up digital dictatorships in a simple, dark equation that he believes will be the defining equation of life in the 21st century:
“B x C x D = AHH”
“Biological knowledge (B) multiplied by computing power (C) multiplied by data (D) equals the ability to hack humans (AHH)”
According to Harari, governments and corporations with access to information about our personality types, political views, religious beliefs, sexual preferences, likes and dislikes, weaknesses and strengths, deepest fears and desires, are able to monitor everyone and predict and manipulate our behavior. Essentially, they have the “ability to hack humans”.
In Harari’s words, if we are not careful, we could create the worst totalitarian regime in history with the biological knowledge, computing power, and data about citizens that tyrants of the past lacked.
On a smaller scale, we increasingly delegate decision-making powers and authority to algorithms and algorithmic recommendation systems. As Harari also points out in his talk:
“Billions of people trust the Facebook algorithm to tell us what is new, the Google algorithms tell us what is true, Netflix tells us what to watch, and the Amazon and Alibaba algorithms tell us what to buy.”
Algorithmic systems even decide if we are suited for a job, if we are eligible for a loan, how our money should be invested, or who we should date by matching us with potential partners on Tinder. Meanwhile, we have no insights into how these decisions are made, and even if we had, we wouldn’t be able to understand the machine's reasoning.
I have written some lengthy, legalistic posts on the GDPR’s regulation of automated decision-making here and here. In short, I conclude, in the words of Edward Snowden, that the GDPR is a paper tiger, as it offers no meaningful protection for individuals against automated decisions. The right of data subjects to obtain human intervention under Article 22 of the GDPR is a noble initiative with limited effect. Even if the decision is necessary for entering into a contract or is based on the individual’s explicit consent, there is often no way of understanding or explaining how a specific decision was reached, not for the individual, the business owner, or the developers of an advanced AI system.
That being said, the EU Commission has proposed a legal framework for regulating AI with a risk-based approach that aims to fill the regulatory gaps and mitigate the dangers of AI systems, in a delicate balancing act with business interests, notably those of small and medium-sized enterprises (SMEs).
“The paper tiger” GDPR did show its teeth in 2021 with hefty fines. The Data Protection Commission of Luxembourg imposed a record fine of €746 million on Amazon, reportedly related to Amazon’s use of customer data for targeted advertising. The Irish Data Protection Commission fined WhatsApp €225 million for unclear privacy policies and a lack of transparency in how it handles user data. Google was fined €150 million and Facebook Ireland Limited €90 million by the French data protection authority CNIL for failing to provide users with a simple method to opt out of tracking cookies.
Exponential technological growth has done wonders for humanity over the last century, raising living standards, comfort, and convenience. And the growth continues unabated, perhaps even outpacing Moore’s law.
Moore’s law is an economic principle named after Intel co-founder Gordon E. Moore, who predicted in 1965 that the number of transistors on a computer chip would roughly double every year for the next decade. He was right. In 1975, Moore revised his prediction, suggesting that the number of transistors would roughly double every two years. If anything, he was too pessimistic: in practice, transistor counts doubled approximately every 18 months for some 50 years, starting around 1961. As the number of transistors on a chip increases, so does the computer's processing power, while the price per transistor falls.
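To get a feel for what that doubling rate means, here is a minimal back-of-the-envelope sketch in Python; the starting count and doubling period are illustrative assumptions, not precise historical figures:

```python
def transistors(start_count: float, years: float, doubling_period_years: float) -> float:
    """Project a transistor count forward, assuming steady exponential doubling."""
    return start_count * 2 ** (years / doubling_period_years)

# Doubling every 18 months (1.5 years) over 50 years:
# 2 ** (50 / 1.5) is roughly a ten-billion-fold increase.
growth_factor = transistors(1, 50, 1.5)
print(f"Growth over 50 years: {growth_factor:.3g}x")
```

A chip that started with a few thousand transistors in the early 1960s would, at this rate, end up with tens of trillions, which is why even Moore's revised two-year estimate understates how far the curve has carried us.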
In 2011, physicist and futurist Michio Kaku described the effects of Moore’s law:
“Today, your cell phone has more computer power than all of NASA back in 1969, when it placed two astronauts on the moon. Video games, which consume enormous amounts of computer power to simulate 3-D situations, use more computer power than mainframe computers of the previous decade. The Sony PlayStation of today, which costs $300, has the power of a military supercomputer of 1997, which cost millions of dollars.”
Exponential growth of technology has of course not been limited to computer chips. Today, text-to-image models such as OpenAI’s DALL·E 2 or Google’s Imagen can generate artwork and hyper-realistic photos from text prompts. Language models can write almost any form of content, nearly indistinguishable from human writing. We have staffless stores like Amazon Go, we can build driverless cars, and we can acquire software to support nearly any business function.
Within decades, AI systems will likely be able to outcompete humans in any intellectual field, including tasks previously considered safe ground, such as research and discovery, philosophizing, judicial review, medical diagnosis, and financial analysis. How severe will the impact on the job market be?
A pair of Oxford University researchers made an early prediction in 2013 that 47% of the total US employment was at high risk of automation within a decade or two.
Another report, from the Organization for Economic Cooperation and Development (OECD) in 2016, offered a low-end estimate: only 9% of jobs were automatable on average across 21 OECD countries.
A PricewaterhouseCoopers (PwC) report from 2017 found that around 30% of jobs in the UK and around 38% in the US were at high risk of automation by the early 2030s.
World-leading AI expert Kai-Fu Lee predicted in 2018 that we would be technically capable of automating 40-50% of jobs in the United States within ten to twenty years.
How should governments around the world respond? And how can we maintain a sense of purpose when job functions that previously gave us an identity are no longer needed? We are dealing with an entirely new beast.
The name of the game in capitalism has always been that the rich get richer while the poor get poorer. Unfortunately, techno-capitalism, which is capitalism boosted by AI systems and other modern technologies, is widening the gap between rich and poor even further. For example, American billionaires gained $2 trillion during the coronavirus pandemic, while parents in Afghanistan are selling their kidneys to feed their starving children in the current food crisis. Such facts remain largely invisible to us, despite all the amazing improvements the internet has brought to our daily lives.
The internet of today looks more and more like a gigantic gaslighting project designed to distract us with attention-grabbing and fake content generated partly or wholly by algorithms. Jonathan Haidt famously compares the impact of social media to the biblical myth of the Tower of Babel. According to the myth, descendants of Noah set out to build a tower “with its top in the heavens”, until God disrupted their plans by confusing their language so they could no longer communicate. Similarly, Haidt theorizes that polarization and echo chambers on social media obfuscate our communication so that we are no longer able to understand each other on the most basic level.
To escape Big Tech’s aggressive business models, built on data collection, attention capture, and targeted ad campaigns, new web services based on blockchain technology are already accessible to anyone. They are centered around creators rather than intermediaries and allow users to own, and even earn revenue from, their data.
The Web 3.0 movement is inspired by, and founded on, the philosophy behind Bitcoin. The Bitcoin system allows users to store and exchange value on the internet outside the control of banks and governments. According to Balaji Srinivasan in his book The Network State, the same system could even be used to disrupt geography itself by forming new countries.
When I study the works of futurists and AI experts, I can't help but think that policymakers and regulators are preparing for a technological tsunami with raincoats and umbrellas. Key public decision-makers should not be oblivious to the scope and scale of the technological disruption which is happening right now above our heads.
In my view, radically new ideas deserve a hearing, and people who think outside outdated boxes and protocols should be welcomed, if we are to regulate the current state of affairs and rise to the challenges and opportunities of the post-internet era.
If you are still with me and interested in deeper philosophical and legal contributions, sign up for my Substack. I am also keen to connect with any writer or entrepreneur who shares my perspective.