In the early 1980s, AI was a hot topic — just like it is now. Back then, nearly every software product was re-branded as containing some form of AI, and the hype was out of control. This re-branding is happening again in 2020.
In 1976, my Yale University colleague, Professor Drew McDermott, chastised our AI colleagues in an article entitled “Artificial Intelligence Meets Natural Stupidity.”
In that article, McDermott took issue with the names that his colleagues were using for their AI systems. For example, an AI system named General Problem Solver was developed in 1959 and was a pioneering technological achievement, but its performance fell far short of its grandiose name.
He implored his colleagues to use more “humble, technical” names that do not anthropomorphize these systems.
McDermott was concerned that the use of grandiose terms would inflate the hype around AI, that the result would be unrealistically high expectations, and that it would all end badly.
And that’s in fact what happened. By the end of the 1980s, AI had fallen out of favor because the reality did not live up to the hype. Many AI companies went out of business.
Stuart Russell said that his AI course at the University of California at Berkeley had 900 students in the mid-’80s and had shrunk to only 25 students in 1990.
McDermott’s criticism is as applicable to today’s AI systems as it was forty years ago. Let’s look at some of the terms in everyday use today in the AI community:
Learning: Researchers apply the term “learning” (e.g., as in “machine learning”) to AI systems. When a child figures out that “1+1=2”, we call it learning.
So, when an AI system learns to add two numbers, shouldn’t we call that learning also? Absolutely not! The problem is that adding two numbers is the only task that the AI system will ever learn.
For a child, learning to add two numbers is part of a lifelong process of learning that the child can apply to many different tasks and contexts. It is misleading to equate what machines and people do by using the term “learning” for both.
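To make the contrast concrete, here is a minimal sketch of machine “learning” of addition (using scikit-learn; the data and setup are my own illustration, not drawn from any particular system):

```python
# A model that "learns" to add two numbers -- and nothing else.
# Illustrative sketch; the training data are made up.
import numpy as np
from sklearn.linear_model import LinearRegression

# Training examples: pairs of numbers and their sums.
X = np.array([[1, 1], [2, 3], [5, 7], [10, 4], [0, 9]])
y = X.sum(axis=1)  # the "lesson": 2, 5, 12, 14, 9

model = LinearRegression().fit(X, y)

print(model.predict([[3, 4]]))  # ~7.0 -- it has "learned" to add
# That is the entire extent of what it learned: the same model cannot
# subtract, count objects, or carry the idea of addition over to any
# other task, the way a child does.
```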
Planning and Imagination: People start using their imaginations from the minute they wake to the minute they go to sleep. They imagine what will happen if they let the dog out and a cat is in the yard.
They imagine what will happen if they go out in the rain with and without an umbrella. They imagine what their black shirt will look like with their tan pants. When they pick up their clothes or make the bed, they imagine the resulting improvement in the appearance of their living quarters. If you think about it, you will find that you use your imagination in many different ways.
A self-driving car is said to have an “imagination” because it can “learn” to project where all the other vehicles and pedestrians will be a few seconds into the future. It “imagines” that future state.
However, projecting that future state is the only task the machine learning system can perform. It cannot imagine anything else. So, machine imagination is really nothing like human imagination and should not be given a label that suggests it is.
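What that machine “imagination” amounts to, in its simplest form, is numerical extrapolation. The sketch below is my own deliberately simplified illustration, not any vendor’s actual system: it projects a tracked road user a few seconds ahead from its current position and velocity.

```python
# Constant-velocity projection of a nearby road user a few seconds ahead.
# A deliberately simplified illustration of "imagining" a future state.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    x: float   # position (meters)
    y: float
    vx: float  # velocity (meters/second)
    vy: float

def project(obj: TrackedObject, seconds: float) -> tuple[float, float]:
    """Predict where the object will be after `seconds`, assuming it keeps
    its current velocity. This prediction is the system's entire "imagined"
    future -- it cannot imagine anything else."""
    return (obj.x + obj.vx * seconds, obj.y + obj.vy * seconds)

pedestrian = TrackedObject(x=12.0, y=3.0, vx=-1.2, vy=0.0)
print(project(pedestrian, seconds=3.0))  # (8.4, 3.0)
```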
Inference: In machine learning, the term “inference” refers to taking a trained model (e.g., a logistic regression or a deep neural network) and applying it to previously unseen instances.
However, machine learning systems can only perform the “inference” step for a single well-defined task. In people, inference refers to the result of commonsense reasoning, which is a generic capability that humans apply across many different tasks and environments. Computers don’t have generic inference capabilities like people.
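For concreteness, here is a minimal sketch of the machine-learning sense of “inference”: fit a logistic regression on labeled examples, then apply it to instances it has not seen before (scikit-learn; the toy task and data are my own illustration):

```python
# "Inference" in the machine-learning sense: apply a trained model to
# previously unseen instances of the same, narrow task.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy task: classify points by whether their coordinates sum to more than 1.
X_train = np.array([[0.1, 0.2], [0.9, 0.8], [0.4, 0.3], [0.7, 0.9]])
y_train = (X_train.sum(axis=1) > 1).astype(int)

clf = LogisticRegression().fit(X_train, y_train)

# The "inference" step: predictions on inputs the model has never seen.
X_new = np.array([[0.1, 0.1], [0.9, 0.9]])
print(clf.predict(X_new))  # [0 1]
# The model "infers" nothing outside this single, well-defined task.
```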
Forty years ago, when McDermott made his comments, I had the impression that most people in the AI community agreed with what he said, and yet they still opted not to change their terminology.
The world would have a much less optimistic view and a lot less fear of the possibility of evil robots and killer computers if today’s researchers and vendors took McDermott’s recommendations to heart.
Please enter any comments below and visit AI Perspectives, where you can find a free online AI 101 textbook with 15 chapters, 400 pages, 3,000 references, and no advanced mathematics.