Recall (from McCorduck) that Weizenbaum was connected to AI through a number of paths, including Kenneth Colby, a Stanford psychiatrist who was interested in modeling neurosis and paranoia[22], and Ed Feigenbaum, a computer scientist at Berkeley. Feigenbaum had been a student of Simon’s, and was creating various AI programs in IPL.[26] Once Weizenbaum got to MIT, he became associated with Project MAC, which was begun, in part, by John McCarthy, the inventor of Lisp and the person who coined the term “Artificial Intelligence”[6].[3]
But Weizenbaum was, first and foremost, what we would now call a software engineer[4], having just come from GE, where he worked on highly practical programs. Several projects were mounted to build on the clear successes of Newell and Simon’s IPL work without having to cope with its ugliness and inefficiency. As mentioned above, there were already several much more programmer-friendly languages, notably Fortran and COBOL. But these languages were aimed at science, engineering, and business, and did not provide the AI-related functionalities of IPL, such as symbol processing, lists, and recursion. So the question naturally arose of how to add these capabilities to those already-existing languages.
The inventive entanglement between Fortran, IPL, and Lisp is concisely captured in a brief remark by Gelernter and coworkers, the creators of FLPL, the “Fortran List Processing Language”:
“[...] consideration was given to the translation of a JOHNNIAC IPL for use with the IBM 704 computer. However, J. McCarthy, who was then consulting for the project, suggested that Fortran could be adapted to serve the same purpose. He pointed out that the nesting of functions that is allowed within the Fortran format makes possible the construction of elaborate information-processing subroutines with a single statement. The authors have since discovered [...] the close analogy that exists between the structure of [a Newell, Shaw, and Simon] list and a certain class of algebraic expressions that may be written within the language. [...] Not to be overlooked is the considerable sophistication incorporated into the Fortran compiler itself, all of which carries over, of course, into our Fortran-compiled list-processing language. It is reasonable to estimate that a routine written in our language would run about five times as fast as the same program written in an interpretive language [like IPL].”[27, pp. 88–89]
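To make McCarthy’s point concrete, the following minimal sketch (in Python rather than Fortran, with hypothetical cons/car/cdr helpers standing in for FLPL-style accessors, not FLPL’s actual function names) illustrates how, once list operations are exposed as ordinary value-returning functions, an elaborate list manipulation can be written as a single nested statement:

```python
# A minimal, illustrative sketch -- not FLPL itself. Cons cells are
# modeled as Python 2-tuples, and plain value-returning functions
# play the role of list accessors/constructors.

def cons(head, tail):
    """Build a single list cell."""
    return (head, tail)

def car(cell):
    """First element of a cell."""
    return cell[0]

def cdr(cell):
    """Rest of the list after the first element."""
    return cell[1]

# Because each operation is just a function returning a value, a list
# can be built, and then picked apart, in single nested expressions --
# the property of function nesting McCarthy pointed out to Gelernter.
lst = cons(1, cons(2, cons(3, None)))
second = car(cdr(lst))   # -> 2
```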
Like Gelernter, Weizenbaum implemented IPL-inspired list-processing facilities as a set of Fortran- (and later MAD-) callable functions, which he called SLIP. In his 1963 paper, Weizenbaum (in addition to citing IPL and FLPL as influences) critiques McCarthy’s Lisp, although without mentioning it by name:
“List processing has won a number of dedicated converts. Some have, however, become somewhat too fervent in their advocacy of list processing. While there may be some programming tasks which are best solved entirely within some list processing system, most tasks coming to the ordinary programmer require the application of a number of distinct techniques. The packaging of a variety of tools within a single tool box appears to be a good, if not an optimum, way of outfitting a worker setting out to solve complex problems. FORTRAN, ALGOL, and other languages of the same type provide excellent vehicles for such provisioning. Apart from the fact that they are very powerful in themselves, they have the advantage that they are well known. The task of coming to grips with these new techniques is then that of adding to a vocabulary of an already assimilated language rather than that of learning an entirely new one.”[49, pp. 535–536]
The acknowledgement in the 1963 SLIP paper is worth quoting in full, as it makes many of the connections explicit:
“[...] SLIP owes a considerable debt to previous list processing systems. Certain of its features are, however, more the result of attempts to build a symbol manipulator for the use of behavioral scientists than to generalize other processors. In this connection, the continuing and generous advice and support of Kenneth Colby, M.D., of Stanford University and of Dr. Edward Feigenbaum of the University of California at Berkeley is gratefully acknowledged. The author also wishes to thank Howard Sturgis of the University of California at Berkeley and Larry Breed of Stanford University for their parts in making the system operative on the computers at their respective computation centers.”[49, p. 536][5]
Although the SLIP paper makes no mention of ELIZA, nor of any specific application, this acknowledgement makes clear that SLIP was motivated by “previous list processing systems” (specifically IPL-V and FLPL), was intended as a “symbol manipulator for the use of behavioral scientists”, and was running on computers at both Stanford and Berkeley. Although the SLIP paper explicitly cites Gelernter’s FLPL, it is likely, given the above acknowledgement, the close timing, and the length of publication cycles, that SLIP was developed essentially simultaneously with FLPL, and that the citation was a publication-sequence nod rather than evidence that FLPL directly influenced SLIP’s development. Whereas Weizenbaum originally embedded SLIP in Fortran, the SLIP that ELIZA is written in was embedded in MAD (although it used the same underlying foreign-function calling machinery as Fortran, an intermediate language called “FAP”, the Fortran Assembly Program). Intriguingly, this SLIP implementation for the 7090 MAD programming language came from Yale[42, p. 62L1], suggesting some interaction with Gelernter, although the details are obscure.
Notable from the above is McCarthy’s involvement with FLPL, and especially his point about the functional nesting of Fortran statements. McCarthy’s own approach to creating a high-level AI language was very different from the one he recommended to Gelernter, and which Weizenbaum also adopted. In addition to his interest in formal matters of logic and mathematics, McCarthy was coincidentally playing around with Church’s lambda calculus, which, as discussed above, was the motivation for Turing’s work that led to universal computers. This aligned nicely with the functional nesting McCarthy had pointed out to Gelernter. Recursion is central to both universal computation and AI.[32] McCarthy, along with his students, most notably Steve Russell, figured out, perhaps coincidentally, how to bring list processing and recursion together into a programming paradigm far simpler and more elegant than IPL, FLPL, or SLIP; thus was born Lisp, among the world’s most influential and enduring programming languages.[19]
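As a minimal illustration of the combination described above (again in Python rather than Lisp, reusing the hypothetical cons-cell encoding from the earlier sketch), here is a list built from cons cells and processed by a function defined recursively on the list’s own structure:

```python
# A minimal sketch of recursion over a cons-cell list, echoing (but in
# no way reproducing) the Lisp style discussed in the text. The
# 2-tuple encoding of cells is an illustrative assumption.

def cons(head, tail):
    """Build a single list cell."""
    return (head, tail)

def length(cell):
    # The recursion mirrors the list's inductive structure: the empty
    # list (None) has length 0; otherwise count the head and recur on
    # the tail.
    if cell is None:
        return 0
    return 1 + length(cell[1])

print(length(cons("a", cons("b", cons("c", None)))))  # prints 3
```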
Author:
(1) Jeff Shrager, Blue Dot Change and Stanford University Symbolic Systems Program (Adjunct) ([email protected]).
[3] McCarthy had, however, moved to Stanford in 1962, two years before Weizenbaum arrived at MIT.[7]
[4] That term was coined in the late 1960s.[18]
[5] Symbols in Lisp and IPL-V are named pointers, and those languages, being self-contained, have specific support for symbol manipulation. Because SLIP is embedded in a host language (initially Fortran, later MAD, in which ELIZA was implemented), naming, should the programmer choose to name things at all, results from ordinary Fortran (or MAD) assignment to variables. It would require another whole paper to track the meaning of “symbol” through the history of programming languages, AI, and cognitive science.