
Why You Shouldn't Fear Artificial General Intelligence

by Steve Shwartz, November 24th, 2020

Too Long; Didn't Read

Many are concerned that AI will progress to the point where evil robots gain artificial general intelligence and take over the world. This will not happen, says Steve Shwartz, author of AIPerspectives.com, which offers a free, 400-page AI 101 book. Today’s AI systems are all narrow AI systems, a term futurist Ray Kurzweil coined in 2005 for machines that can perform only one specific task. Humans and fictional intelligent robots, by contrast, can perform many dissimilar tasks and reason from commonsense knowledge of the world.


Many are concerned that AI will progress to the point where evil robots gain artificial general intelligence and take over the world. This will not happen.

AI has blasted its way into the public consciousness and our everyday lives. It is powering advances in medicine, weather prediction, factory automation, and self-driving cars. Even golf club manufacturers report that AI is now designing their clubs.

Every day, people interact with AI. Google Translate helps us understand foreign language webpages and talk to Uber drivers in foreign countries. Vendors have built speech recognition into many apps. We use personal assistants like Siri and Alexa daily to help us complete simple tasks. Face recognition apps automatically label our photos. And AI systems are beating expert game players at complex games like Go and Texas Hold ’Em. Factory robots are moving beyond repetitive motions and starting to stock shelves.

The recent progress in AI has caused many to wonder where it will lead. Science fiction writers have pondered this question for decades. Some have invented a future in which we have at our service benevolent and beneficial intelligent robots like C-3PO from the Star Wars universe. Others have portrayed intelligent robots as neither good nor evil but with human-like frailties, such as the humanoid robots of Westworld, which gained consciousness and experienced emotions that caused them to rebel against their involuntary servitude. Still other futurists have foreseen evil robots and killer computers: AI systems that develop free will and turn against us, like HAL of 2001: A Space Odyssey and the terminators of the eponymous movie franchise.

Speculation about the potential dangers of AI is not limited to the realm of science fiction. Many highly visible technologists have predicted that AI systems will become smarter and smarter until robot overlords eventually take over the world. Tesla CEO Elon Musk has called AI humanity’s “biggest existential threat” and a “fundamental risk to the existence of civilization.” The late physicist Stephen Hawking said, “It could spell the end of the human race.” Philosopher Nick Bostrom, founding director of the Future of Humanity Institute, argues that AI poses the greatest threat humanity has ever encountered, greater than nuclear weapons.

Artificial General Intelligence vs. Narrow AI

The AI systems that these technologists and science fiction authors worry about are all examples of artificial general intelligence (AGI). AGI systems share with humans the ability to reason; to process visual, auditory, and other input; and to use that input to adapt to their environments in a wide variety of settings. These fictional systems are as knowledgeable and communicative as humans about a wide range of human events and topics.

There are two striking differences between fictional AGI systems (i.e. fictional evil robots) and today’s AI systems: First, each of today’s AI systems can perform only one narrowly defined task. A system that learns to name the people in photographs cannot do anything else. It cannot distinguish between a dog and an elephant. It cannot answer questions, retrieve information, or have conversations. Second, today’s AI systems have little or no commonsense knowledge of the world and therefore cannot reason based on that knowledge. For example, a facial recognition system can identify people’s names but knows nothing about those particular people or about people in general. It does not know that people use eyes to see and ears to hear. It does not know that people eat food, sleep at night, and work at jobs. It cannot commit crimes or fall in love.

Today’s AI systems are all narrow AI systems, a term coined in 2005 by futurist Ray Kurzweil to describe machines that can perform only one specific task. Although the performance of narrow AI systems can make them seem intelligent, they are not.

In contrast, humans and fictional intelligent robots can perform large numbers of dissimilar tasks. We not only recognize faces, but we also read the paper, cook dinner, tie our shoes, discuss current events, and perform many, many other tasks. Humans and fictional intelligent robots also reason based on our commonsense knowledge of the world. We apply commonsense, learned experience, and contextual knowledge to a wide variety of tasks. For example, we use our knowledge of gravity when we take a glass out of the cupboard. We know that if we do not grasp it tightly enough, it will fall. This is not conscious knowledge derived from a definition of gravity or a description in a mathematical equation; it’s unconscious knowledge derived from our lived experience of how the world works. And we use that kind of knowledge to perform dozens of other tasks every day.

New AI Paradigms

The big question is whether today’s narrow AI systems will ever evolve into intelligent robots with artificial general intelligence that can use commonsense reasoning to perform many different tasks.

Most of today’s breakthrough AI systems use a form of machine learning called supervised learning, in which the goal is to learn a function that maps inputs to output categories. For example, a facial recognition system takes an image as input and outputs the name of the person in the image. Reinforcement learning has the same basic shape: the goal is to learn a function that predicts the optimal action for a given state.
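
To make this concrete, here is a minimal sketch of supervised learning in the sense described above: a single function is learned from labeled examples, mapping inputs to output categories, and that function is all the system knows. The dataset and model (scikit-learn’s bundled digits data and logistic regression) are illustrative choices, not the systems discussed in this article.

```python
# Minimal sketch of narrow, supervised learning: fit one function
# f(input) -> category from labeled examples, and nothing more.
# Illustrative only; not the facial recognition systems discussed above.

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Inputs: 8x8 digit images flattened to 64 numbers. Outputs: labels 0-9.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Learning" here means fitting the parameters of one classification function.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

print("accuracy on the one narrow task:", model.score(X_test, y_test))

# The fitted function labels digit images and does only that. It has no
# knowledge of what digits are, cannot answer questions, and cannot take on
# any other task without new data and a new round of training.
```

A reinforcement learning system is narrow in the same way: the learned function maps a game or environment state to an action, and nothing else.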

Geoffrey Hinton has expressed doubts that current paradigms, including supervised learning, reinforcement learning, and natural language processing, will ever lead to artificial general intelligence (and the evil robots of science fiction lore). In a 2017 interview, Hinton suggested that getting to artificial general intelligence will likely require throwing out the currently dominant supervised learning paradigm, and that it will depend on the efforts of “some graduate student who is deeply suspicious of everything I have said.” Yann LeCun has also said that supervised learning and reinforcement learning will never lead to artificial general intelligence, because they cannot be used to create systems that have commonsense knowledge about the world.

Some AI researchers are starting to speculate about new approaches. When evaluating their viability, it is important to remember that enthusiasm for narrow AI’s accomplishments should not translate into optimism about these new ideas, because the existing narrow AI approaches are a dead end for building AGI systems.

Learning like People

Many researchers describe human learning as compositional: We learn many building block skills that we then put together to learn new skills. People learn concepts, rules, and knowledge about the world that transfer over as we learn to do different tasks. These researchers argue that the key to commonsense AI reasoning (and to artificial general intelligence and evil robots) is to build systems that learn compositionally like people. The idea is for systems to learn concepts and rules that can serve as building blocks that enable the system to learn higher-level concepts and higher-level rules.
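
As a purely illustrative toy (my own, not a proposal from these researchers), the compositional idea amounts to reusing previously acquired concepts as building blocks for higher-level ones:

```python
# Toy illustration of compositional "building blocks": lower-level concepts
# are reused to define a higher-level concept. The concepts here are
# hand-written stand-ins; a real AGI system would have to learn them.

def is_round(obj):
    # building-block concept #1
    return obj["shape"] == "round"

def is_edible(obj):
    # building-block concept #2
    return obj["edible"]

def looks_like_fruit(obj):
    # higher-level concept composed from the building blocks above
    return is_round(obj) and is_edible(obj)

apple = {"shape": "round", "edible": True}
wheel = {"shape": "round", "edible": False}

print(looks_like_fruit(apple))  # True
print(looks_like_fruit(wheel))  # False
```

The open question is how a machine could learn such building blocks and their composition on its own; hand-coding them, as in this toy, does not scale.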

My biggest concern about this approach is that progress in understanding how people represent commonsense knowledge has been glacial. Forty years ago, we had a long debate about the nature of the internal representations people use to answer questions like “What shape are a German Shepherd’s ears?” We still do not know the answer, even though some of the top people in AI and cognitive science took part in the debate. Answering a question about the shape of a dog’s ears is just a drop of water in an ocean of representational schemes and reasoning processes. Moreover, we do not even know whether these representational schemes and reasoning processes are innate or learned. Innateness has been an ongoing topic of academic debate for over fifty years, with no resolution in sight.

How long will it be before we know enough about how people think to make real progress toward artificial general intelligence and evil robots? At the current rate of progress, it appears we will need hundreds — maybe thousands — of years, and it may never happen.

Deep Learning

Some researchers argue that while supervised and reinforcement learning per se are dead ends for building artificial general intelligence, deep learning may yet take us to the promised land. Both Yann LeCun and Greg Brockman have proposed scaling up unsupervised learning systems in the hope that they will magically acquire commonsense knowledge about the world and learn to reason based on that knowledge.

GPT-3 is a striking example of this scaling strategy. With 175 billion parameters, it is more than a hundred times larger than GPT-2 (1.5 billion parameters), which itself was roughly ten times larger than the original GPT (117 million parameters). GPT-2 showed a surprising ability to generate human-sounding, if not always coherent, text, and GPT-3 generated even better text. The OpenAI researchers behind the GPT systems see this as an emergent capability that resulted solely from making the network bigger.

GPT-3 certainly demonstrated a massive ability to extract statistical regularities from its training text, and perhaps an ability to memorize small segments of that text. However, it did not learn facts about the world or gain any ability to reason based on world knowledge. At this stage of the game, I see absolutely no evidence that learning world knowledge and reasoning skills will emerge from this approach, and I see no logical rationale for believing it will happen.

Yoshua Bengio has proposed novel deep learning architectures designed to break deep learning out of its narrow AI box. One goal is to learn higher-level building blocks that can help AI systems learn compositionally. This is an interesting but very early-stage idea. Here again, the idea that systems like this will magically learn commonsense knowledge and reasoning is a leap of faith.

Modeling the Human Brain

Another proposed approach to artificial general intelligence is to understand the architecture of the physical human brain and model AI systems after it. After decades of research, we know only some very basic facts about how the physical brain processes information. For example, we know that the cortex statically and dynamically stores learned knowledge, that the basal ganglia process goals and subgoals and learn to select information by reinforcement learning, and that the limbic structures interface the brain with the body and generate motivations, emotions, and the value of things.

The idea of modeling the neurons in the brain has been in the proposal stage for over forty years. It has yet to gain any real traction partly because of the extremely slow progress in understanding the human brain and partly because we have no concrete method for modeling what we know about the human brain in AI programs. Here again, we are near the starting gate and have no evidence that this approach will succeed.

Faster Computers

Ray Kurzweil has long argued that artificial general intelligence will arrive as a by-product of the trend toward bigger and faster computers. He popularized the idea of the singularity, the point in time when computers become smart enough to improve their own programming. Once that happens, his theory goes, their intelligence will grow exponentially, and they will quickly attain a superhuman level of intelligence. Kurzweil predicted that the singularity would occur around 2045.

That said, it is hard to imagine how processing power by itself can create artificial general intelligence. If I turn on a computer from the 1970s with no programs loaded, turn on one of today’s computers with no programs loaded, or turn on a computer fifty years from now with no programs loaded, none of these computers will be capable of doing anything at all. If I load a word processing program on each of these computers, then each of them will be limited to performing word processing. Newer, more modern computers will be able to respond faster and process bigger documents, but they will still only be capable of word processing. The same will be true for the computers of the future.

Faster computers by themselves will not result in artificial general intelligence. As Steven Pinker said, “Sheer processing power is not a pixie dust that magically solves all your problems.” In the unlikely event that artificial general intelligence ever becomes possible, the programming and learning algorithms will likely be complex enough to require extremely powerful computers. However, those programming and learning algorithms will be necessary; speed and power will not be sufficient.

Will We Achieve Artificial General Intelligence?

The technology behind narrow AI systems cannot progress to artificial general intelligence and evil robots. There are several proposals for how we might get to AGI, but all of them remain vague. Since the late 1950s, AI researchers have had many ideas for how to create AGI, and none has panned out. There is absolutely no evidence that today’s ideas will fare any better.

Both the optimism about and the fear of achieving artificial general intelligence are grounded in the success of narrow AI systems. That enthusiasm has naturally, but incorrectly, spilled over into optimism about the prospects for AGI. As Oren Etzioni, the CEO of the Allen Institute for AI, said, “It reminds me of the metaphor of a kid who climbs up to the top of the tree and points at the moon, saying, ‘I’m on my way to the moon.’”

You probably do not expect time travel to happen in your lifetime. You probably expect it to remain in the realm of science fiction for hundreds of years, if not forever. You probably feel the same way about warp speed, putting people into hibernation, invisibility, teleportation, uploading a person’s mind into a computer, and reversing aging. You should put artificial general intelligence and evil robots in this same category.

Previously published at https://aiperspectives.com/dont-fear-artificial-general-intelligence/