Weizenbaum’s Gomoku and the Art of Creating an Illusion of Intelligence

by Machine Ethics, September 10th, 2024

Too Long; Didn't Read

Weizenbaum’s 1962 paper on gomoku highlights his concern about AI's ability to create an illusion of intelligence. He critiques how complex programs can fool users into perceiving intelligence, emphasizing the importance of human interpretation over the technical details of the algorithm.

Abstract and 1. Introduction

  2. Why ELIZA?
  3. The Intelligence Engineers
  4. Newell, Shaw, and Simon’s IPL Logic Theorist: The First True AIs
  5. From IPL to SLIP and Lisp
  6. A Critical Tangent into Gomoku
  7. Interpretation is the Core of Intelligence
  8. The Threads Come Together: Interpretation, Language, Lists, Graphs, and Recursion
  9. Finally ELIZA: A Platform, Not a Chat Bot!
  10. A Perfect Irony: A Lisp ELIZA Escapes and is Misinterpreted by the AI Community
  11. Another Wave: A BASIC ELIZA turns the PC Generation on to AI
  12. Conclusion: A certain danger lurks there
  13. Acknowledgements and References

6 A Critical Tangent into Gomoku

Before getting to ELIZA itself, it will be useful to take a brief look at an obscure 1962 paper of Weizenbaum’s – his first – published in the trade magazine Datamation.[48] Weizenbaum was still with GE at the time, and this brief paper has the odd and telling name: “How to make a computer appear intelligent” [emphasis as in the original]. The article, only a bit over two pages long, reports a simple strategy for playing gomoku – a Go-like game. Weizenbaum didn’t actually write the program described in the paper, although he apparently designed the algorithm.[6] Regardless, Weizenbaum’s interest in this program is not in the algorithm itself, which he describes as “simple”. Rather, he is interested in the fact that the program was able to “create and maintain a wonderful illusion of spontaneity” (p. 24). Indeed, the paper (again, only a bit over two pages long) spends the first half page presenting a caustic screed against AI that is worth reproducing in full:


“There exists a continuum of opinions on what constitutes intelligence, hence on what constitutes artificial intelligence. Perhaps most workers in the fields of heuristic programming, artificial intelligence, et al, now agree that the pursuit of a definition in this area is, at least for the time being, a sterile activity. No operationally significant contributions can be expected from the abstract contemplation of this particular semantic navel. Minsky has suggested in a number of talks that an activity which produces results in a way which does not appear understandable to a particular observer will appear to that observer to be somehow intelligent, or at least intelligently motivated. When that observer finally begins to understand what has been going on, he often has a feeling of having been fooled a little. He then pronounces the heretofore “intelligent” behavior he has been observing as being “merely mechanical” or “algorithmic.” The author of an “artificially intelligent” program is, by the above reasoning, clearly setting out to fool some observers for some time. His success can be measured by the percentage of the exposed observers who have been fooled multiplied by the length of time they have failed to catch on. Programs which become so complex (either by themselves, e.g. learning programs, or by virtue of the author’s poor documentation and debugging habits) that the author himself loses track, obviously have the highest IQ’s.”[48, p. 24]


The paper then goes on to describe the “simple” algorithm in typical algorithmic terms, and at the end, far from closing the loop on his opening salvo, simply suggests some ways to potentially improve the program’s play.
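
To make concrete what a “simple” algorithm in this vein might look like, here is a minimal, purely illustrative sketch in Python of a naive line-scoring gomoku player. This is a reconstruction of the kind of heuristic the paper gestures at, not Weizenbaum’s (or Shepardson’s) actual algorithm; every name and scoring choice below is an assumption made for illustration. It scores each empty cell by the longest line a stone placed there would extend, for either side, and plays the best-scoring cell.

```python
# Illustrative sketch only: a naive line-scoring heuristic for gomoku
# (five in a row on a 15x15 board). This is NOT the algorithm from the
# Datamation paper; it merely shows how a very simple scoring rule can
# produce play that looks deliberate to an observer.

SIZE = 15
DIRS = [(1, 0), (0, 1), (1, 1), (1, -1)]  # horizontal, vertical, two diagonals

def run_length(board, r, c, dr, dc, player):
    """Count consecutive stones of `player` starting next to (r, c) along (dr, dc)."""
    n = 0
    r, c = r + dr, c + dc
    while 0 <= r < SIZE and 0 <= c < SIZE and board[r][c] == player:
        n += 1
        r, c = r + dr, c + dc
    return n

def score_cell(board, r, c, player):
    """Longest line `player` would own after playing at the empty cell (r, c)."""
    best = 0
    for dr, dc in DIRS:
        line = (1 + run_length(board, r, c, dr, dc, player)
                  + run_length(board, r, c, -dr, -dc, player))
        best = max(best, line)
    return best

def choose_move(board, me, opponent):
    """Pick the empty cell maximizing the better of attack and defense."""
    best_score, best_move = -1, None
    for r in range(SIZE):
        for c in range(SIZE):
            if board[r][c] is not None:
                continue
            # Favor extending our own line; weight blocking slightly less.
            s = max(score_cell(board, r, c, me),
                    score_cell(board, r, c, opponent) - 0.5)
            if s > best_score:
                best_score, best_move = s, (r, c)
    return best_move

if __name__ == "__main__":
    board = [[None] * SIZE for _ in range(SIZE)]
    board[7][7] = "X"  # opponent opens in the center
    print(choose_move(board, me="O", opponent="X"))  # -> a cell adjacent to (7, 7)
```

Even a crude rule like this responds sensibly to threats and openings, which is precisely the behavior an observer might read as intelligent – until the scoring rule is explained and the play becomes “merely mechanical.”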


This curious paper provides a deep and interesting insight into the dual forces tearing at Weizenbaum and leading to ELIZA. He is, first and foremost, a software engineer. Yet here we have a paper about a game-playing program that opens with a screed about how AI engineers are out to fool people. Weizenbaum clearly has a direct interest in AI, and he does not think much of it. Specifically, he is concerned about how easily people can be “fooled” by complex programs into an “illusion” of intelligence. A more generous and interesting way to put this is that Weizenbaum is focused on the users, not on the programs. What interests him is not AI per se – about which he has only a little to say, and not much good – but the human psychological process of interpretation: in this case, how users come to interpret a “simple algorithm” as playing intelligently.


Author:

(1) Jeff Shrager, Blue Dot Change and Stanford University Symbolic Systems Program (Adjunct) ([email protected]).


This paper is available on arxiv under CC BY 4.0 license.

[6] “A program implementing the strategy here outlined has been written by R. C. Shepardson”[48, p. 26]