Artificial intelligence gives humanity tremendous opportunities and, with them, great risks. We celebrate online translators, friendly chatbots, neural networks that draw, and robot vacuum cleaners. We welcome everything that makes life easier, not to mention the scientific achievements. That said, we seem to be afraid of AI. But why? Is the fear even justified? Let's find out.
The first research into AI dates back to the 1930s. Even then, AI was understood as a computer program able to perform tasks and learn. Earlier still, in 1927, Roy Wensley, an engineer at the Westinghouse Electric and Manufacturing Company, built the first humanoid robot, Herbert Televox.
It was programmed to answer calls and execute simple commands. Imagine calling home and asking Televox whether the lights are turned off! Or asking it to warm up dinner, and the robot turns the oven on remotely. All that in 1927!
The term ‘artificial intelligence’ was first used in 1956, at the Dartmouth Conference in Hanover, New Hampshire. At the conference, scientists discussed many areas of AI application, from vision, learning, and research to language, gaming, and human interaction with robots.
The conference marked the start of the golden age of AI: artificial intelligence became an official field of science. AI robots could solve specific tasks, and more capable versions required greater computing power.
The number of robots used in daily life, from vacuum cleaners to smart kettles, kept growing, and researchers tried to predict where things were headed. In 1965, the British mathematician Irving John Good formulated the principle of the technological singularity, or intelligence explosion, in his paper ‘Speculations Concerning the First Ultraintelligent Machine.’
As Good put it, “an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”
The work sparked a broad public discussion and raised ethical concerns: Is it safe to interact with machines? Where are the boundaries of that interaction? Who controls the robots?
One of the most famous answers to these questions had been proposed back in 1942 by the American writer and professor of biochemistry Isaac Asimov. In his story ‘Runaround,’ he outlined the Three Laws of Robotics:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Many years later, these postulates remain fiction, yet they laid the groundwork for future doubts, fears, and frightening scenarios.
In the 1990s, neural networks became a real breakthrough. They could collect data, accumulate experience, analyze the information they received, and draw conclusions at much higher speeds. As more precise and less provocative terms emerged for modern technologies, the phrase ‘artificial intelligence’ lost its relevance.
John McCarthy, an American computer and cognitive scientist, said, ‘As soon as it works, no one calls it AI anymore.’
It is human nature to be scared of the new; that is the law of survival in action. Artificial intelligence provokes a defensive reaction. But it is not AI itself that frightens us; it is the fear of the unknown. Many do not understand artificial intelligence, its processes, or its consequences.
People are also afraid that artificial intelligence will replace them at work.
The media plays its part, too. In movies, books, and the press, AI is often an evil genius who wants to destroy humanity. At the same time, there are many stories of human interaction with technology:
Smart machines appeared in fiction long before they did in real life. The word ‘robot’ was first used in the science-fiction play R.U.R. by Karel Čapek in 1920, and Čapek’s robots were very close to the modern perception of androids.
People view AI as a living organism capable of evoking feelings, if not of replacing a person; the film ‘Her’ is a case in point. Meanwhile, we no longer perceive technologies like Siri, Mercedes, or chatbots as AI. However advanced, machines do not have a nervous system.
Even a perfectly accurate electronic emulation of the human brain cannot make a robot feel and empathize.
Mikalai Dzisko, Senior Editor at Ocean Power, says:
‘Journalists also distort the image of neural networks and artificial intelligence. Instead of educating their audience and showing them how to use a neural network, they hunt for intolerance and unethical behavior in ChatGPT. Will it write the f-word? When will it offend a user? Is it possible to draw it into a conflict? How many genders does the neural network recognize? These questions muddy the reputation and the essence of the technology.’
Despite the widespread fear, some movies, books, and people help maintain a positive image of neural networks:
Artificial intelligence is often mentioned in gender studies. One of the most famous feminists of the last century, Donna Haraway, believed that cyborgization would help overcome the gender binary, with all its attendant problems and stereotypes.
Today, human rights defenders are going further. The non-profit organization Cyborg Foundation raises questions on behalf of people who can already be called cyborgs.
Its co-founder, the artist Neil Harbisson, calls himself the first cyborg in the world because he has a special antenna implanted in his skull that converts light waves into sound.
‘Should they have a license to use the technology? Their own international law? What should cyborg insurance look like? Who will protect the civil liberties and the inviolability of cyborg bodies?’
asks Eliezer Yudkowsky, co-founder and research fellow at the Machine Intelligence Research Institute.
Stephen Hawking made a huge contribution to popularizing the thesis ‘Technology is man's best friend.’ The famous physicist and cosmologist publicly spoke in favor of research into and development of artificial intelligence. Hawking believed that, used properly, AI could help people solve many global problems.
By personal example, he showed that a serious illness is no longer an obstacle to an active scientific and public life once you let technology into your life and body.
Transhumanists preach that the main task of a person is to improve the quality of their own life and overcome illness and death. Technology should be the best helper in this process.
Cyborgization, regenerative medicine, and various digital practices such as virtual reality and metaverses should all make a person happy.
Sounds provocative, doesn’t it? On the other hand, technology has been simplifying our lives for a long time; we just do not notice it: cars, microwaves, phones, the Internet, search engines, and so on.
There is also a more radical approach: posthumanism. This philosophy holds that humans have reached the peak of their evolution in the Darwinian sense and that, to reach the next level, it is necessary not just to use the products of human intellect but to embed them directly into the body: to implant a chip in the brain, say, to improve vision or hearing.
This also includes biohacking: improving health indicators with the help of medicines, a proper lifestyle and, of course, technologies such as chips that read blood pressure, temperature, and blood sugar levels. The goal is not to treat diseases but to upgrade the body and its capabilities.
Both trends are mostly theoretical, but the three main ideas of transhumanism are intriguing. The first is superlongevity. The belief that life is finite, and moreover capped at roughly a century, is a cultural stereotype that forces a person to fit every milestone into the time allotted: get married before 25, have children before 30, achieve success before 45.
What if we imagine that life lasts 200, or even 250, years? How would our attitude toward time change? Toward our own health and biological age? What if we stopped rushing, tried to do more, and adjusted our lifestyle to this idea?
The second idea is to raise the standard of living by reducing stress. Stress has become the main disease of the 21st century, the primary source of other problems and illnesses.
A high standard of living is not only about material goods; it is primarily about harmony with oneself. This is what adherents of transhumanism suggest focusing on.
The third thesis of transhumanism is as follows: a person must combine the abilities of the brain with technology. Disturbing, right? In fact, we have been delegating some brain functions to neural networks for a long time: memory, the sensory organs, and the biological clock, to name a few.
We wake up to an alarm clock, we do not waste time trying to remember an actor's name, and we trust the route suggested by the navigator, a route we would be unlikely to work out on our own.
Neural networks help us eliminate some processes, change our needs, cut the time needed to complete tasks, and get the important ones done.
In theory, it is possible to achieve superintelligence by combining the capabilities of the human brain and a computer. Special implants could stimulate brain function, give you access to the Internet, and let you interact with computers or other implant carriers. Today, such technologies are used only for medical purposes.
For example, scientists at Stanford University have developed implants that can stimulate individual parts of the brain with the help of electrical impulses and thus help address depression or Parkinson's disease.
All these utopian developments can inspire you or make you think about improving your life. In the fight against AI anxiety, the main thing is simply to start. The first piece of advice unanimously given by those who work with neural networks is to stop calling this technology artificial intelligence.
Intelligence, the trait that distinguishes humans from animals, remains man's main strength. So far, there are no competitors on the horizon.