
Interpretation is the Core of Intelligence

by Machine Ethics, September 10th, 2024

Too Long; Didn't Read

Interpretation, the process of assigning meaning to experiences, is central to both human cognition and AI. While ELIZA itself lacked interpretive machinery, the interpretations made by its human users were crucial. Weizenbaum highlighted the need for improved interpretive capabilities to create more effective conversational AI.

Abstract and 1. Introduction

  2. Why ELIZA?
  3. The Intelligence Engineers
  4. Newell, Shaw, and Simon’s IPL Logic Theorist: The First True AIs
  5. From IPL to SLIP and Lisp
  6. A Critical Tangent into Gomoku
  7. Interpretation is the Core of Intelligence
  8. The Threads Come Together: Interpretation, Language, Lists, Graphs, and Recursion
  9. Finally ELIZA: A Platform, Not a Chat Bot!
  10. A Perfect Irony: A Lisp ELIZA Escapes and is Misinterpreted by the AI Community
  11. Another Wave: A BASIC ELIZA turns the PC Generation on to AI
  12. Conclusion: A certain danger lurks there
  13. Acknowledgements and References

7 Interpretation is the Core of Intelligence

Interpretation is the process through which humans – one might say, intelligences of whatever sort, or, more generally, “cognitive agents” – assign meaning to experiences. By “meaning” here we mean approximately what linguists mean by the term “semantics” or cognitive scientists mean by “mental models”. It is the ability to assign meaning, separate from the experience itself, and thence to draw inferences based upon this assignment, that enables cognitive agents simultaneously to abstract away from the specifics of experiences and to reason about those specifics.[7]


The importance of interpretation has been rediscovered in both psychology and AI dozens of times under almost as many names, including putative mental structures such as scripts, plans, frames, schemas, and mental models (e.g., [39], [28]), and putative mental processes such as analogy, view application, commonsense perception, and conceptual combination or conceptual blending (e.g., [9]). In AIs driven by Artificial Neural Networks (ANNs), the structures and processes involved in interpretation are diffuse, but these systems nonetheless engage in interpretation.


Interpretation is also central to Colby’s work on paranoia. Indeed, Colby describes one of his earliest implementations of paranoia as precisely interpretation gone awry: “A parser takes a linear sequence of words as input and produces a treelike structure [...] The final result is a pointer to one of the meaning structures which the interpretation-action module uses in simulating paranoid thinking for both the paranoid and the nonparanoid modes.”[21, p. 520]
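
Colby’s description amounts to a small pipeline: parse the word sequence, select a meaning structure, and let an interpretation-action step turn that structure into behavior. The Python sketch below is purely illustrative – the cue words, structures, and responses are invented, not Colby’s actual code – but it shows how “paranoid mode” can be modeled as interpretation systematically skewed toward threat:

```python
# Hypothetical sketch of the pipeline Colby describes: parse input,
# select a "meaning structure," and let an interpretation-action step
# choose behavior. All names and rules are illustrative inventions.

# Toy inventory of meaning structures, keyed by cue words.
MEANING_STRUCTURES = {
    "follow": {"topic": "persecution", "threat": True},
    "watch":  {"topic": "surveillance", "threat": True},
    "hello":  {"topic": "greeting", "threat": False},
}

def parse(sentence: str) -> list[str]:
    """Stand-in for Colby's parser: reduce a linear word sequence to
    normalized words (a real parser would build a treelike structure)."""
    return [w.strip(".,!?").lower() for w in sentence.split()]

def interpret(words: list[str]) -> dict:
    """Return a pointer to the first matching meaning structure."""
    for w in words:
        if w in MEANING_STRUCTURES:
            return MEANING_STRUCTURES[w]
    return {"topic": "unknown", "threat": False}

def respond(meaning: dict, paranoid: bool) -> str:
    """Interpretation-action step: in paranoid mode, even ambiguous
    meanings are read suspiciously -- interpretation gone awry."""
    if paranoid and meaning["threat"]:
        return "Why are you doing this to me?"
    if paranoid:
        return "What are you really after?"
    return f"Let's talk about {meaning['topic']}."

print(respond(interpret(parse("People follow me everywhere.")),
              paranoid=True))
# -> "Why are you doing this to me?"
```

The design point is that the paranoid and nonparanoid modes share the same parser and the same meaning structures; only the interpretation-action step differs.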


Weizenbaum, who it will be recalled had worked with Colby, explicitly recognized that ELIZA had no interpretive machinery: no way to assign meaning to the content of the conversation. Indeed, he chose the Rogerian framework precisely because that framework (or at least Weizenbaum’s gloss of it) puts almost all of the content work on the patient/user, who presumably has intact interpretive machinery: “This mode of conversation was chosen because the psychiatric interview is one of the few examples of categorized dyadic natural language communication in which one of the participating pair is free to assume the pose of knowing almost nothing of the real world.”[47, p. 42] Weizenbaum even points to this as the thing that most needs to be improved in ELIZA in order to create a fuller conversant: “In the long run, ELIZA should be able to build up a belief structure [...] of the subject and on that basis detect the subject’s rationalizations, contradictions, etc.”[47, p. 43]
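
To make concrete what “no interpretive machinery” means, here is a minimal Python sketch in the spirit of ELIZA’s keyword-and-template transformations (not Weizenbaum’s actual MAD-SLIP implementation; the rules and pronoun swaps are invented for illustration). Nothing in it assigns meaning: it manipulates surface strings, and whatever sense the output makes is supplied by the human reader:

```python
import re

# Illustrative ELIZA-style responder: surface pattern matching plus
# lexical pronoun swapping. No meaning is assigned anywhere; the
# "understanding" lives entirely in the human interlocutor.

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
    (re.compile(r".*", re.I),          "Please go on."),  # catch-all
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words, purely lexically."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def eliza_reply(utterance: str) -> str:
    """Return the first rule's template, filled with reflected groups."""
    text = utterance.strip().rstrip(".")
    for pattern, template in RULES:
        m = pattern.match(text)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."

print(eliza_reply("I feel my mother hates me"))
# -> "Why do you feel your mother hates you?"
```

The apparent empathy of the reply is a pattern-matched echo; all of the content work – deciding what the exchange is about and what it implies – falls to the user, exactly as the Rogerian framing requires.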


Yet interpretation is nonetheless central to Weizenbaum’s work: not the interpretations that ELIZA makes, which are, per Weizenbaum himself, non-existent, but the interpretations made by the human interlocutors with ELIZA (or, as described above, with the gomoku player).


Author:

(1) Jeff Shrager, Blue Dot Change and Stanford University Symbolic Systems Program (Adjunct) ([email protected]).


This paper is available on arxiv under CC BY 4.0 license.

[7] Of course, terms such as “meaning” and “experience” are vague, and in a paper focused on interpretation these would need to be specified more carefully, but they are not central to the present exploration. For a broad review, see [8].