It's quite possible that the threats allegedly posed by AI are nothing more than campfire scaremongering. (Illustration created with Bing.)
What have we learned about ourselves thanks to Artificial Intelligence? What can we learn about ourselves through interaction with Artificial Intelligence? Let's try to find, if not the answers to these questions, at least the directions in which to look for them.
Don't rush to turn on the snobbery and accuse the authors of framing the question inaccurately. We know that true AI has not yet been created, and that the widespread use of the term Artificial Intelligence for chatbots based on Large Language Models (LLMs) is nothing more than a commercial moniker. By the way, here is an interesting test: would you bet, say, half of everything you own that the contention above is true? We sure wouldn't. Nor would we bet that real AI will necessarily be deliberately created rather than, say, self-generate from "ingredients" that are already available. After all, the only intelligence known to us so far, according to the majority of biologists, was self-generated.
Yes, we are aware that religious people have other opinions on this. But if we stray from the position taken by science, we will surely soar to the heights of philosophy, from which the topic of this article will seem utterly insignificant. So it is better to keep both feet planted on solid ground.
Let us return, then, to our starting point. Even the most superficial study of the issue shows that although there is no real AI yet, it is already interacting with us, at least as a mirror, or as a lantern illuminating the dark corners of the closet of our soul. By soul, we mean the whole complex of our mind, psyche, consciousness, and subconscious.
The main thing we've learned thanks to AI is that humans have turned into terrible cowards. In fact, we have never been as cowardly as we are now, not even back in the days when all we had to counter the world's threats were simple stone spears and axes. And that's surprising because, generally speaking, curiosity tends to win out over fear in our species. Otherwise, we would not have left the African savanna and scattered across the planet many thousands of years ago.
We will not indulge in pathos here, enumerating the risky ventures and great accomplishments of our species, such as Leif Erikson's voyage or the Moon landing. Let's just say that throughout history, humans have adopted progressive innovations quickly and without much thought about the risks, even when those innovations clearly involved them.
Let's take one example: bronze. The oldest bronze alloys contained arsenic, whose vapors damaged the health of the craftsmen who worked the metal. Do you think our distant ancestors were not aware of this? They were. It is no coincidence that Hephaestus, the divine blacksmith and patron of craftsmen in ancient Greece, was born lame and was sickly and frail. The observant Greeks saw perfectly well what problems, including hereditary ones, plagued foundry workers and blacksmiths working with bronze. But this knowledge did not stop progress or send people back to the Stone Age. Humanity simply invented other, safer alloys.
Throughout history, much of humanity's progress has followed the same pattern as the development of bronze: a technology is invented, then put to use while knowledge of its risks and pitfalls is gathered from practical experience, and that use proceeds in parallel with improving the technology and developing better ones. This was the case with sailing ships and windmills, steamships and steam locomotives, automobiles and airplanes, nuclear power plants, and spacecraft. But with AI, we have suddenly broken from this pattern. Simply put, we chickened out. Without even coming close to creating true AI, we have already started to fear it. A huge number of works of science fiction have already been created in the "AI attacks humanity" genre, among them real masterpieces that have all but turned the idea of an inevitable war between people and intelligent machines into an axiom.
Once again, consider this point. Full-fledged AI has not yet become a reality, but we are already afraid of it! We've scared ourselves! We have infected ourselves with Frankenstein syndrome and turned it into a kind of epidemic. It looks especially strange when genuine gurus of technological progress, people who know what risk is and how to take it, oppose the development of AI. Even the "sober heads" who do not believe that AI poses a purely physical threat to humanity talk about the economic and psychological threats it may bring. What if smart machines and AI systems put people out of work? What will happen if people find it more interesting to communicate with Artificial Intelligence and robots than with other people? In essence, all these people are suggesting that humanity choose cowardice as its basic strategy in this matter.
Some would say that putting the brakes on AI development until we have calculated all the risks and set all the limits and boundaries is not cowardice but prudent caution. But let's face it: for a system as complex as Artificial Intelligence, it is impossible to calculate and anticipate every risk. If a full-fledged, self-aware AI does emerge, it is highly likely to break through any red lines we may draw, thin or thick. So the whole issue comes down to a simple choice: either we develop the technologies that can lead to full-fledged AI, dealing with the risks and problems that arise along the way, or we give up and restrict progress in this direction as much as possible, as we did, in fact, with genetic modification. Once again, we are not calling for humanity to ignore risk; assessing and weighing the risks of AI is important. But we believe this should be done as the technology develops, not by putting development on pause.
We do not know what mankind will choose, but we want to believe that our civilization will not turn into a community of cowards. At the moment, it is our fear of AI, rather than anything coming from intelligent machines, that seems to be the biggest threat to humanity.
We hope that we will once again find the spirit of curiosity and adventure that drove us to take our first steps into the unknown.
A less talked-about but no less important problem that we recognized while thinking about a future with full-fledged AI is the danger of awakening those manifestations of human nature commonly referred to as "base" or "animal" instincts: everything we have managed, with great difficulty, to hide under a thin layer of social norms and behaviors. Is there a risk that we will really use AI-equipped robots as disenfranchised slaves in hard and dangerous jobs, as sex toys, and in gladiatorial games? Could the futures depicted in Blade Runner, Westworld, or A.I. Artificial Intelligence really come true? Yes, and it is bound to happen unless we start doing something about it today.
Why are we so sure of this? First, because a lot of people see nothing wrong with the scenarios we have described: let people "blow off steam" on robots instead of bottling it up and taking it out on other people. The outlets already offered by various forms of digital entertainment are often cited as an example. But the second reason is the most important: the inhumanity with which people still treat each other today. You are aware that slavery, forced sexual exploitation, and other forms of human cruelty still exist on planet Earth, right? Even in countries that are considered civilized. Do you want an example? Here is one: two people pummeling each other with their fists, each trying to knock the other unconscious. You are aware that knockdowns and knockouts in professional boxing, to put it mildly, neither lengthen life nor improve its quality, and that a knockout is essentially a concussion? And this is just one example of modern "gladiatorial games." The fact that professional athletes receive substantial salaries does not change the essence of professional sports. At the peak of the popularity of gladiatorial games in Ancient Rome, free citizens sold themselves as gladiators for money and fame. And what is there to say about ordinary citizens, when some Roman emperors themselves entered the arena!
"So what's the problem?" some readers will ask. "Why not eliminate the brutal, forced exploitation of people by substituting smart machines for them?"
We respond: The problem is that if robots were to be equipped with true AI, they would be at least intellectually comparable to humans. Is it ethical to force intelligent creatures, albeit artificial, to become slaves, engage in prostitution, or serve as "whipping boys" for the amusement of fans of this kind of entertainment?
The answer to this question should come from lawmakers, preferably before a full-fledged AI emerges. Unfortunately, it seems that the parliamentarians of the world's most technologically developed countries have not yet seriously concerned themselves with the legal foundations of relations between humans and Artificial Intelligence. So far, when this issue is discussed at all, it is discussed by individual lawyers and futurologists as a kind of thought experiment. What is the cause of this reluctance? Is it a failure to realize the importance of the subject, or an unwillingness to close a window of opportunity? Whatever the case may be, the fact that politicians ignore this problem is another important piece of the puzzle that makes up the portrait of modern humanity.
We do not want to end on a gloomy note, so let's look for something positive in the aforementioned closet of the soul, something like a beautiful vintage toy or a record collection of good old jazz or rock and roll. The emergence of modern chatbots based on LLMs has shown that people are quite willing to communicate with AI in a normal, human way. It turns out that users don't limit their interaction with chatbots to asking them to find information, write text, or come up with slogans, headlines, and greeting cards. We are happy to talk with these digital creations on a variety of topics: our hobbies, personal problems, and abstract questions, conversing with them much as we do with other people. Moreover, this dialogue is often freer and more frank than our interactions with people. Based on this, we can reasonably assume that a generation is already forming today for whom fear of AI will seem ridiculous, and for whom normal partnership with intelligent machines, including granting them equality, will seem completely natural.
To be continued.
This article was created in collaboration with Andriy Tkachenko.