Has AI already learned to think? Will it learn soon? Or are even the most advanced programs destined to remain mere imitators of human intelligence? Let's explore why finding answers to these questions matters.
Let's start with the most famous and widely accepted method for determining whether a machine is intelligent: the Turing Test. Many experts believe that modern AI, especially those based on large language models, passes this test quite effectively. To recap, the essence of the test involves a judge interacting with both humans and programs.
The judge's task is to determine which of the interlocutors is human and which is a machine. The AI must therefore mimic human behavior so convincingly that the judge makes a mistake. There are various ways to conduct the test, but the core idea remains the same: if we cannot distinguish between the AI's responses and those of a human, then the machine is considered intelligent.
However, amidst all the hype surrounding this test, many people overlook an important point. Alan Turing based his test on the Imitation Game, a popular party game of his time. In this game, two people, say a man and a woman, would hide in separate rooms and answer questions from the attendees through an intermediary or typewritten notes.
The woman would try to convince the attendees that she was a man, while the man would attempt to convince them that he was a woman. Do you see what we are getting at?
Yes, everything is correct. The Turing Test is essentially a game in which a machine, following specific rules, tries to outplay a human. However, to win this game, the program doesn't need to be smart; it merely needs to play well. The essence of the "Imitation Game" is imitation. The AI doesn't need to be genuinely intelligent to win; it only needs to mimic intelligence.
This means that the only thing that can be confidently proven by the Turing Test is that the AI has learned to pass the Turing Test by imitating a human, nothing more.
It is worth noting that it is quite popular to regard even the most advanced AI as nothing more than a "Chinese Room." The essence of this thought experiment boils down to a person who does not know Chinese following instructions to select characters in response to those handed in by someone who does know Chinese.
Thanks to these detailed instructions, it is possible to create the appearance of a meaningful conversation without actually understanding the meanings of the characters.
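To make the idea concrete, here is a minimal, purely illustrative sketch in Python. The tiny phrasebook is a hypothetical stand-in for the room's "detailed instructions": the operator produces plausible-looking replies by pure lookup, with no understanding of what the symbols mean.

```python
# A toy "Chinese Room": the operator knows nothing about the meaning of the
# symbols and simply follows a rulebook that maps incoming messages to replies.
# RULEBOOK is a hypothetical, drastically simplified stand-in for the
# exhaustive instructions the thought experiment assumes.

RULEBOOK = {
    "你好": "你好！很高兴认识你。",            # greeting -> polite greeting back
    "你会说中文吗": "会，当然会。",            # "do you speak Chinese?" -> "yes, of course"
    "今天天气怎么样": "今天天气很好。",        # "how is the weather?" -> "it is nice today"
}

def operator_reply(message: str) -> str:
    """Return whatever the rulebook dictates; no understanding is involved."""
    return RULEBOOK.get(message, "对不起，我不明白。")  # fallback: "sorry, I don't understand"

if __name__ == "__main__":
    for question in ["你好", "今天天气怎么样", "你喜欢音乐吗"]:
        print(question, "->", operator_reply(question))
```

From the outside, the exchange can look like a conversation; inside, there is only symbol matching, which is exactly the point of the thought experiment.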
Modern AI agents, according to most experts, lack self-awareness or true intelligence, functioning instead as advanced imitators, mimics, and compilers. But does this prevent them from easily and convincingly carrying on conversations in various styles and on diverse topics? Not at all.
They are capable of passing academic and professional examinations, writing poetry and articles, creating graphics and paintings, and even hosting broadcasts and presentations, like Pitch Avatar, a project our team is part of.
Day by day, we are edging closer to a universal AI capable of handling most of today's professions. And all this, we repeat, without any clear evidence of true intelligence.
Surprisingly, in this respect artificial intelligence is not fundamentally different from humans. Isn't a large part of our behavior imitation? We follow social norms and imitate parents, teachers, and other authorities. Moreover, we often do this without considering the meaning of our actions. We simply try to copy a model, memorize the rules, or choose behaviors that elicit the desired reactions.
A typical example is the average driver who, while learning to drive, does not strive to deeply understand the meaning and evolution of all the traffic rules. And this, we should note, is not always bad. It is impossible to thoroughly understand all aspects, details, and nuances of our complex lives. To save time and effort, a lot has to be taken for granted.
So what if AI has mastered this art perfectly, brilliantly manipulating data and imitating humans? Even if it doesn't truly understand or realize what it is doing, the main thing is that it effectively handles the tasks we set for it. This, in turn, frees up our time for creative endeavors and interesting pastimes.
When it comes to practical applications, it doesn't matter whether AI possesses self-awareness and intelligence or is merely a consummate imitator and compiler.
The crux of the matter lies in safety and ethics. From a safety perspective, there is a risk that an intelligent and self-aware AI could get out of control and start acting autonomously, potentially endangering humans. From an ethical standpoint, exploiting intelligent and self-aware AI imposes serious limitations on us, especially if it is proven that AI can experience the equivalent of suffering like biological beings.
For these reasons, we must work diligently to develop criteria that can clearly, distinctly, and unambiguously determine whether a machine is self-aware and intelligent.
What is intelligence?
Despite the significance of the aforementioned reasons, we believe the primary reason is something else. By determining the criteria that make AI intelligent, we can finally get closer to answering one of the main philosophical questions: "What is intelligence?" Not human intelligence, not machine intelligence, but intelligence in general.
We call our species Homo sapiens, "man of reason," but in reality we have no clear idea of what reason truly is. The one aspect we understand, at least in part, is self-awareness. Yet, as it turns out, not only humans but also many animals are capable of recognizing their own "I."
So, on what basis do we consider ourselves intelligent and deny it to others? Do whales, dolphins, and great apes possess intelligence or not? We won't delve into the complexities of biology, philosophy, and metaphysics. We simply remind you that we lack an unambiguous definition of rationality and clear boundaries between reason and non-reason.
Perhaps the main point of humanity's work on artificial intelligence is precisely this: if not to finally resolve this problem, then at least to make significant strides in the right direction.
The article was created in collaboration with Andriy Tkachenko