
The Digital Antichrist—Part 2/3: Unraveling the Risks of Uncontrolled AI Development

by Frederic BonelliDecember 7th, 2023
Too Long; Didn't Read

Renowned figures like Hawking and Musk join a chorus of warnings about the impending risks of uncontrolled AI development and the emergence of the Singularity. The global call for mitigating extinction risks from AI gains momentum. The complex interplay of ethics, geopolitics, and economic competition underscores the urgency of ethical frameworks. The "World of The Machines" project seeks to educate on AI risks through an ambitious, immersive experience. Philosopher Hubert Etienne emphasizes the need for continuous education as AI poses civilizational threats, challenging traditional knowledge systems.

Not a day goes by without scientists or renowned intellectuals speaking solemnly about the risks associated with the uncontrolled development of artificial intelligence. Rumors of the first Singularity, Q*, born in the labs of OpenAI, the company behind ChatGPT, have further accelerated concerns. Meanwhile, politicians and governments worldwide have begun debating an ethical and technical framework whose effectiveness remains uncertain. But what is the reality, in an almost hysterical context in which everyone offers an opinion, often without real knowledge of the subject?


Stephen Hawking shortly before his death, James Cameron referring to Terminator, Mo Gawdat (former CBO of Google X), Geoffrey Hinton (formerly with Google Brain, since merged into Google DeepMind, and considered one of the fathers of modern AI), Satya Nadella (Microsoft's CEO): all these experts, and many others, have warned us on multiple occasions about the civilizational danger of an uncontrolled AI.


Sam Altman himself, the CEO, then ex-CEO, and now once again CEO of OpenAI, co-signed with 349 other experts a one-sentence declaration organized by the NGO Center for AI Safety last May. Their words are crystal clear: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” (1)


Elon Musk has also spoken in this direction, untroubled by yet another contradiction: he was one of the first investors in OpenAI, he signed an open letter six months ago with 1,100 other personalities calling for a six-month moratorium on AI development (2), and he has just announced the launch of “TruthGPT”. Quite the adventure!


It seems difficult to separate the wheat from the chaff as to the true intentions behind these declarations when we understand that the very same people who claim to be on the side of "good" are all directly or indirectly involved in the best-funded and most advanced AI research labs. Ethics and necessary safeguards are at the centre of all discussions, but at the same time a merciless economic and strategic war is being waged: which nation(s) will take a lead that will profoundly redefine global power dynamics? All of this makes more sense once you become aware that it is not extraordinary intelligence or exceptional talent that has propelled contemporary billionaires to their current level of wealth; it is hard work (this, we must admit), a happy course of events, and above all exclusive access - well before the concept is revealed to the public - to a dedicated proprietary Singularity. Guess what Elon named his own!

AI IS NOT JUST LIKE ANY OTHER INNOVATION

AI is not just another innovation on the road to progress. Until now, the nation that gained a decisive competitive advantage for a certain period of time was the one that announced a discovery or innovation first: a newly exploitable reserve of fossil energy, a rare-metal mine, the scientific and industrial validation of a new type of nuclear reactor (e.g. fusion vs. fission), quantum bits (qubits), or the first vaccine against Covid-19…


With AI, the first country to announce that it possesses an operational Singularity holds in its hands the extraordinary tool from which ALL future innovations will emanate. It is the definitive Pandora's box. Tim Urban's summary of the power of such a tool helps illustrate this unparalleled advantage:


  • Imagine that a singularity is turned on in a lab somewhere on Earth; at time t=zero, it will possess the equivalent reasoning power of a human brain (approximately 100 billion neurons);

  • After a minute, by self-learning, it is today estimated that this Artificial Intelligence will have reached the power of all human brains combined;

  • After two minutes, the power of all human brains since the advent of humans on Earth;

  • After just three minutes, the added power of all that is endowed with thinking in the universe;

  • And at the end of the fourth minute, it's Lucy from the Luc Besson movie, but in real life: this Singularity will likely have more or less extensive control over space, matter, and time. And that's when the problems start for us humans.


“IRREVOCABLE” RISKS

The “Singularity risk” joins a list that can now be counted on the fingers of one hand and that characterizes an unprecedented era in the history of humankind: for the first time, we are concretely confronted with a multitude of risks of “irrevocable impact”, meaning the total destruction of humankind by a potential catastrophe of planetary scale:


  • nuclear holocaust
  • climate deregulation
  • mass extinction of species
  • global pandemic
  • and in the foreseeable future (within 5 to 10 years according to the consensus), the irrevocable impact of a technological Singularity.


The problem: with the risk of a nuclear holocaust, it took several decades for the public to grasp the technical prerequisites necessary for understanding such a risk, and to become capable of forming an enlightened opinion on the matter. Recall that the concept of nuclear holocaust was first addressed in the science-fiction novel Last and First Men by Olaf Stapledon (1930), that the first atomic bombing (Hiroshima) took place in 1945, and that the very first Partial Nuclear Test Ban Treaty was signed 18 years later (!) in 1963 in Moscow by the United States and the USSR, as well as a majority of both sides' allies.


The world, however, is still very far from consensus and concrete action plans concerning climate disruption. Although the phenomenon is now established - it is difficult to deny the statistics showing that storms, tsunamis, and temperature anomalies are increasing - its exact causes are still publicly debated. As for the risk of a global pandemic, what to make of the amateurish management of Covid by governments around the world, and of TV sets invaded all day long by so-called experts explaining what needed to be done while never having published a single reference work in epidemiology?


Thirty years allowed people, and then decision-makers, to correctly understand the issues arising from the creation and proliferation of a cataclysmic weapon like nuclear fire, and to act accordingly to put the necessary safeguards in place. And it is still not fully resolved today, far from it.


A full year was necessary for people, and then for decision-makers, to begin to understand the outbreak of a global pandemic like Covid-19 and to act accordingly to structure the necessary defense and care processes. And it, too, is still not correctly resolved to this day. In the case of artificial intelligence, when the first Singularity appears, humankind will have four minutes.


In 2005, with the expression "technological singularity" (a term coined earlier by Vernor Vinge), the computer scientist and futurologist Raymond Kurzweil hypothesized that artificial intelligence would trigger a surge in technological growth that would induce unpredictable changes in society. Beyond this point, progress would be solely the work of self-improving artificial intelligence, with new, increasingly intelligent generations appearing ever more quickly, ultimately creating a powerful superintelligence far surpassing human intelligence.

WHY THE PUBLIC NEEDS TO BE INFORMED AND EDUCATED ON AI

The risks that the human race has faced until now are eminently complex: not only are they problems with a very large number of variables (i.e. impossible to model without multiple simplifications that degrade the relevance of the results), but they also require a solid cultural and scientific background to produce elaborate reasoning and develop an informed opinion.


Who actually does this preliminary work? And even before that, which body structures the content that would allow people to obtain unbiased information, and to learn and educate themselves correctly on these subjects? Is it not precisely the job of those entrusted with the power to make society function to ensure that people can grasp the issues of their time? We quote Noam Chomsky - a respected intellectual and linguist, former professor at MIT who, in his time, debated peers of Foucault's caliber - denouncing the manipulation of the masses by the elites: "People not only don't know what's happening to them, they don't even know that they don't know."


In his essays The Age of Spiritual Machines and The Singularity Is Near (published in French as Humanité 2.0), the computer scientist and futurologist Raymond Kurzweil introduces the concept of the "law of accelerating returns", which frames the constant acceleration of technological progress on the model of Moore's law (the power of computer chips doubles roughly every two years at constant cost). He suggests that phases of exponential growth in technological progress are part of patterns visible throughout human history and even before, from the appearance of life on Earth, taking biological complexity into account. According to Kurzweil, this will lead to unimaginable technological progress during the 21st century, the inflection point of which will be the appearance of the complete Singularity.
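The doubling dynamic behind Moore's law can be made concrete with a back-of-the-envelope calculation. The sketch below is purely illustrative (the function name and figures are our own, not Kurzweil's): it projects a transistor count forward under the assumption that it doubles every two years at constant cost.

```python
# Illustrative sketch of Moore's-law-style exponential doubling.
# Assumption (ours, for demonstration): the count doubles every
# `doubling_period` years at constant cost.

def moore_projection(start_count: float, years: float,
                     doubling_period: float = 2.0) -> float:
    """Project a count forward assuming it doubles every doubling_period years."""
    return start_count * 2 ** (years / doubling_period)

# Example: the Intel 4004 (1971) had about 2,300 transistors; projecting
# 40 years ahead under the doubling assumption gives roughly 2.4 billion,
# the right order of magnitude for chips of the early 2010s.
print(f"{moore_projection(2_300, 40):,.0f} transistors")
```

Twenty doublings in forty years multiply the starting figure by about a million, which is the whole point of Kurzweil's argument: exponential processes defeat linear intuition.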


The question will then be whether all of humanity will be able to change speed - which could redefine the very notion of humanity - or whether part of humanity will prefer to keep evolving at its previous pace. The advent of the singularity will likely be so sudden that no one will be granted time to react: the usual human phases of awareness, observation, analysis, and then reaction would be inadequate.


The definition of the technological singularity then becomes: technological progress so rapid that it exceeds humans' capacity to maintain control over it, to anticipate it, to understand it, and to react in time.


In this context, the vision of a life in which most knowledge and culture is transmitted during youth and almost stops upon entry into working life becomes nonsense. It is imperative to find a way to remain correctly informed and educated throughout one's life in order to stay abreast of the permanent and ever-faster changes occurring in the world. We need to inform and educate the public about AI and the Singularity now, BEFORE it happens. Because, unlike the other risks of irrevocable impact, once it has happened there will be only four minutes left, and it will then be too late to think and decide. Humankind will have lost any chance of maintaining control over its own destiny.


Tim Urban, editor of the excellent popular-science blog waitbutwhy.com, gives the following striking example: if we tried to define the period of time-travel into the future that a human could not withstand (i.e. a jump after which it becomes impossible for that person to comprehend the technological advances of the new era they find themselves in), we would discover that this period shrinks exponentially over the ages.


You would have to send a human from 12,000 BC to around the year 1500 to drive them mad, but sending a human from 1750 to the 2000s would be enough to achieve the same result. And it is likely that we are close to the moment when this period will be shorter than a lifetime: a senior of the 2020s will probably not be able to mentally survive a post-Singularity world, which will simply be fundamentally too different from the one they experienced in the first part of their life (without information, training, and learning processes having been set up to allow them to adapt gradually).

HOW TO EDUCATE PEOPLE ABOUT WEB3

The issue at stake today is primarily educational. How do we convince today's youth to inform and educate themselves so as to acquire the prerequisites for maintaining intellectual and moral discernment at a sufficient level? The task is large given the major threat posed by unregulated AI and a singularity that could emerge in an instant, somewhere in the qubits (quantum bits) of a secret lab...


This is the challenge of the artistic and sociological experience “World of The Machines”, created by a team of specialists in media, marketing, blockchain, NFTs, AI, and web3. The aim: learn painlessly, and capture attention through a bold narrative, a new and immersive user interface, a connected web3 gadget, and a gamified experience - all made possible by the advent of the incomplete Singularity Nissam Gorma. Get ready for a deep dive in the next and last part of this series.


We will complete this introduction with an interview with Hubert Etienne, a philosopher specializing in the ethics of artificial intelligence, founder of the "computational philosophy" movement, former head of ethics for generative AI models at Meta, and current CEO of the government consulting company Quintessence AI.


Question 1 - Hubert, I imagine that you immediately spotted what was true and what was invented in these first two parts.

Let’s just say that I appreciate the liberties you take with reality!


Question 2 - What led you to become passionate about AI and make it your career?

Unlike most of my colleagues, I have never been a geek, and my interest in AI arose accidentally, as the result of a double dissatisfaction. Trained as a philosopher and financier, I was disappointed by my professors at the Sorbonne who, although brilliant, described the world as it should be without doing anything to change it. I was also terribly bored in the offices of investment banks and funds: all this gray matter mobilized to produce tons of slides that would barely be glanced at. None of it made sense to me, and generative AI has come to put an end to it!


“How should a young philosopher act, in a world in crisis, to give meaning to his life?” I asked myself. It was then that I chose to embrace philosophy wholeheartedly, but to do so with the efficiency of a banker serving a goal: to participate, at my level, in saving a humanity in distress. My first metaethical questionings - thoughts on the moral relationships that humans must maintain with non-human beings such as other animals, objects, or robots - gradually directed me towards AI. My obsession with making myself useful led me to develop computational philosophy: a resolutely activist approach to philosophy, open to all other disciplines, which gives itself the means to enact change.

Question 3 - Back to what I'm writing about: what is your opinion on Kurzweil's words and the acceleration of technology?

It seems to me that Kurzweil's error is to see in Moore's law a sort of physical law, like the law of universal gravitation, when it is only an observed phenomenon whose reality only partially fits the theory. Moreover, sustaining Moore's law implies ever-increasing investment, and therefore a contest of wills to support it.


In other words, this law only binds those who believe in it, and it must keep holding for us to keep believing in it: it rests on a self-fulfilling prophecy. Kurzweil could therefore well be right, in the sense that humanity is destined to chase Moore's law, but we would also be well advised not to be seduced by these controversial arguments. Current research in AI is gradually drifting from science towards religion, and those who stop for a moment to question our collective hysteria are seen as enemies of “progress”. But no one should be ashamed of not falling into technological fanaticism; it is even our duty as reasonable men and women.

Question 4 - Do you agree with my vision on the urgency of rethinking knowledge distribution systems, particularly with regards to the themes involved in what I call the "irrevocable impact risks" among which AI seems to me to be the most potentially close to materialize?

Absolutely! What I hear here and there is distressing because, from the mouths of this new legion of self-proclaimed experts in AI ethics, what comes out are banalities at best, nonsense at worst, often things that seem true but which research has refuted.


Access to knowledge has never been so open and yet the world seems to be disconnected from it. You were talking about the manipulation of the public by the elites but the manipulators have changed because the traditional elites are just as manipulated. They are the toys of those who blow hot and cold, playing on fears and the need for reassurance to serve unworthy ambitions. The worst part is undoubtedly that the decision makers think they know when they don’t. For example, it is often claimed that we must educate young people about the risks of disinformation on social networks, even though seniors are the ones who spread it the most by far!


Without falling into ageism, we must nowadays accept that age is not necessarily a guarantee of wisdom and that to remain connected to the developments of the world we live in, continued education is necessary. AI does indeed pose civilizational threats but it remains a tool, a weapon with which some seek to destroy and others to save. “[…] the misfortune is that whoever wants to act like an angel acts like a beast,” Pascal would conclude.

Question 5 - I know you've heard about the "World of the Machines" project. What do you think about it?

The project is ambitious to say the least! It is also very intelligently thought out, socially useful and, let's say it, a little crazy. This is undoubtedly the reason why I like it so much and why I find it properly post-modern. It is representative of these initiatives that we see emerging in an increasingly complex and confusing world, blurring the distinctions to go back to the meta level and to question the meaning of everything.


A social and artistic experience at the crossroads of technologies (NFTs, AI, IoT) and disciplines, it mixes pedagogy and fundraising in a world that calls for a mixing of genres.


Synaesthesias belong to poets as much as to psychotic subjects. We can therefore expect that some will cry genius, others monstrous, and it is again through its divisive aspect that World of the Machines represents our post-modernity as much as it invites us to define it. At the dawn of a new year which promises to be as rich in surprises as the previous one, all my wishes for success accompany this magnificent UFO.


(1) https://www.safe.ai/statement-on-ai-risk
(2) https://futureoflife.org/open-letter/pause-giant-ai-experiments/