
How ELIZA’s Success Revealed the Pitfalls of Machine Credibility

by Machine Ethics, September 11th, 2024

Too Long; Didn't Read

Joseph Weizenbaum’s ELIZA was intended as a tool to study human interpretation of AI, but it ended up being misconstrued as an example of true AI. Despite its role in AI research, ELIZA’s success in creating an illusion of understanding led to the exact confusion Weizenbaum aimed to avoid. This misinterpretation, compounded by the shift from SLIP to Lisp and later developments in AI, underscores the ongoing issue of attributing human-like qualities to machines and the potential dangers of such misconceptions.

Abstract and 1. Introduction

  2. Why ELIZA?
  3. The Intelligence Engineers
  4. Newell, Shaw, and Simon’s IPL Logic Theorist: The First True AIs
  5. From IPL to SLIP and Lisp
  6. A Critical Tangent into Gomoku
  7. Interpretation is the Core of Intelligence
  8. The Threads Come Together: Interpretation, Language, Lists, Graphs, and Recursion
  9. Finally ELIZA: A Platform, Not a Chat Bot!
  10. A Perfect Irony: A Lisp ELIZA Escapes and is Misinterpreted by the AI Community
  11. Another Wave: A BASIC ELIZA turns the PC Generation on to AI
  12. Conclusion: A certain danger lurks there
  13. Acknowledgements and References

12 Conclusion: A certain danger lurks there

Weizenbaum’s goal of using ELIZA as a platform for the study of the human process of interpretation was thwarted by exactly what he did not want to see: the “kludge” [in nearly the original sense of the term] supplanted the reality. Instead of serving as a tool to study interpretation and interaction with AI, ELIZA became a cause célèbre in and of itself, the DOCTOR script being the only one ever seen, because it was so good for such a simple program – exactly the opposite of Weizenbaum’s point! Moreover, exactly what Weizenbaum did not want to happen with regard to Lisp vs. SLIP came to pass: instead of using Fortran (via SLIP) to build complex AI programs, nearly everyone in AI, or involved in symbolic and/or list processing, turned to Lisp, and SLIP eventually faded away.


In 1950, Alan Turing wrote:


“I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. I believe further that no useful purpose is served by concealing these beliefs. The popular view that scientists proceed inexorably from well-established fact to well-established fact, never being influenced by any unproved conjecture, is quite mistaken. Provided it is made clear which are proved facts and which are conjectures, no harm can result. Conjectures are of great importance since they suggest useful lines of research.”[44, p. 442]


A mere decade and a half later, Joseph Weizenbaum wrote:


“With ELIZA as the basic vehicle, experiments may be set up in which the subjects find it credible to believe that the responses which appear on his[17] typewriter are generated by a human sitting at a similar instrument in another room.”[47, p. 42]


Regardless of how close this description may seem to Turing’s test, Weizenbaum did not build ELIZA to pass that test. Rather, he built it to run experiments, including ones akin to the Turing Test, that could be used to study human interpretive processes (especially in the case of artificial intelligence). He believed that this research was critically important, but not merely for the academic reasons that motivated Turing and most AI researchers:


“[The] whole issue of the credibility (to humans) of machine output demands investigation [...] [Important] decisions increasingly tend to be made in response to computer output. The ultimately responsible human interpreter of ‘What the machine says’ is, not unlike the correspondent with ELIZA, constantly faced with the need to make credibility judgments. ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding, hence perhaps of judgment deserving of credibility. A certain danger lurks there.”[47, pp. 42–43]


The problem of how humans impute agency, correctness, and intelligence to machines is not only still present, but has become exponentially more important in recent years, with the widespread diffusion of internet bots and large language models. Our modern computational lives might have been better had Weizenbaum pursued his goal of using ELIZA to study people’s interpretive interaction with computers, and especially with AIs. Unfortunately, his fear that “[t]here is a danger [...] that the example will run away with what it is supposed to illustrate”[47, p. 43] proved all too prescient.


Author:

(1) Jeff Shrager, Blue Dot Change and Stanford University Symbolic Systems Program (Adjunct) ([email protected]).


This paper is available on arXiv under a CC BY 4.0 license.

[17] Even for the time, it is striking that Weizenbaum constantly refers to ELIZA’s interlocutors as male, given that the only example he provides of a conversation with ELIZA is (putatively) with a woman!