Interpretation is the process through which humans – or, more generally, “cognitive agents” of whatever sort – assign meaning to experiences. By “meaning” here we mean approximately what linguists mean by “semantics” or what cognitive scientists mean by “mental models”. It is the ability to assign meaning, separate from the experience itself, and thence to draw inferences based upon that assignment, that enables cognitive agents both to abstract away from the specifics of experiences and to reason about those specifics.[7]
The importance of interpretation has been rediscovered in both psychology and AI dozens of times under almost as many names, including putative mental structures such as scripts, plans, frames, schemas, and mental models (e.g., [39], [28]), and putative mental processes such as analogy, view application, commonsense perception, and conceptual combination or conceptual blending (e.g., [9]). In AIs driven by Artificial Neural Networks (ANNs), the structures and processes involved in interpretation are diffuse, but these systems nonetheless engage in interpretation.
Interpretation is also central to Colby’s work on paranoia. Indeed, Colby describes one of his earliest implementations of paranoia as precisely interpretation gone awry: “A parser takes a linear sequence of words as input and produces a treelike structure [...] The final result is a pointer to one of the meaning structures which the interpretation-action module uses in simulating paranoid thinking for both the paranoid and the nonparanoid modes.”[21, p. 520]
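To make the architecture in that passage concrete, here is a minimal, purely illustrative sketch (in Python, rather than anything resembling Colby’s actual implementation) of a pipeline in which a parse is mapped to a pointer into a set of meaning structures, and an interpretation-action step construes the same structure differently in paranoid and nonparanoid modes. All names and structures below are hypothetical.

```python
# A hypothetical, highly simplified pipeline in the spirit of the passage above:
# parse -> pointer to a meaning structure -> interpretation-action, where the
# paranoid mode systematically construes the same meaning structure as threatening.
# All names here are illustrative; none are drawn from Colby's actual code.

MEANING_STRUCTURES = {
    "question_about_self": {"nonparanoid": "curiosity", "paranoid": "interrogation"},
    "offer_of_help": {"nonparanoid": "benevolence", "paranoid": "manipulation"},
}

def parse(words):
    """Stand-in for the parser: map a word sequence to a (toy) tree-like structure."""
    if "why" in words:
        return ("question", tuple(words))
    if "help" in words:
        return ("offer", tuple(words))
    return ("other", tuple(words))

def select_meaning(tree):
    """Return a 'pointer' (here, a dictionary key) into the meaning structures."""
    kind, _ = tree
    return {"question": "question_about_self", "offer": "offer_of_help"}.get(kind)

def interpret_and_act(meaning_key, mode):
    """Interpretation-action: the same meaning structure is construed differently
    depending on mode -- 'interpretation gone awry' in the paranoid case."""
    if meaning_key is None:
        return "no interpretation"
    return MEANING_STRUCTURES[meaning_key][mode]

words = "why do you ask".split()
print(interpret_and_act(select_meaning(parse(words)), "paranoid"))     # -> interrogation
print(interpret_and_act(select_meaning(parse(words)), "nonparanoid"))  # -> curiosity
```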
Weizenbaum, who it will be recalled had worked with Colby, explicitly recognized that ELIZA had no interpretive machinery – no way to assign meaning to the content of the conversation. Indeed, he chose the Rogerian framework precisely because that framework (or at least Weizenbaum’s gloss of it) puts almost all of the content work on the patient/user, who presumably has intact interpretive machinery: “This mode of conversation was chosen because the psychiatric interview is one of the few examples of categorized dyadic natural language communication in which one of the participating pair is free to assume the pose of knowing almost nothing of the real world.”[47, p. 42] Weizenbaum even points to this as the thing that most needs to be improved in ELIZA in order to create a fuller conversant: “In the long run, ELIZA should be able to build up a belief structure [...] of the subject and on that basis detect the subject’s rationalizations, contradictions, etc.”[47, p. 43]
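The absence Weizenbaum describes can be illustrated with a small sketch of the kind of keyword-based decomposition and reassembly that ELIZA performs. This is a toy Python approximation, not Weizenbaum’s MAD-SLIP program or his DOCTOR script, and the rules shown are invented for illustration; the point is only that the transformation is purely textual, with no meaning structure anywhere in the loop.

```python
import re

# A toy approximation of ELIZA-style processing: each rule pairs a keyword
# decomposition pattern with a reassembly template. The rules here are invented
# for illustration; the transformation is purely textual and assigns no meaning.
RULES = [
    (re.compile(r".*\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
    (re.compile(r".*\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
]

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.match(utterance)
        if match:
            return template.format(match.group(1).rstrip("."))
    # Content-free fallback: the Rogerian pose leaves the meaning-making to the user.
    return "Please go on."

print(respond("It seems my mother dislikes me"))  # -> Tell me more about your mother.
print(respond("I am unhappy."))                   # -> How long have you been unhappy?
print(respond("Men are all alike."))              # -> Please go on.
```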
Yet interpretation is nonetheless central to Weizenbaum’s work – not the interpretations that ELIZA makes, which are, per Weizenbaum himself, non-existent, but the interpretations made by the human interlocutors of ELIZA (or, as described above, of the gomoku player).
Author:
(1) Jeff Shrager, Blue Dot Change and Stanford University Symbolic Systems Program (Adjunct) ([email protected]).
[7] Of course, terms such as “meaning” and “experience” are vague, and in a paper focused on interpretation these would need to be specified more carefully, but they are not central to the present exploration. For a broad review, see [8].