
The Intelligence Engineers: How Turing and Lovelace Laid the Foundations for AI's Future

by Machine Ethics, September 10th, 2024

Too Long; Didn't Read

Alan Turing’s Universal Machine and Ada Lovelace's insights into symbolic computing laid the foundation for AI. This legacy influenced early AI projects, including ELIZA.

Abstract and 1. Introduction

  2. Why ELIZA?
  3. The Intelligence Engineers
  4. Newell, Shaw, and Simon’s IPL Logic Theorist: The First True AIs
  5. From IPL to SLIP and Lisp
  6. A Critical Tangent into Gomoku
  7. Interpretation is the Core of Intelligence
  8. The Threads Come Together: Interpretation, Language, Lists, Graphs, and Recursion
  9. Finally ELIZA: A Platform, Not a Chat Bot!
  10. A Perfect Irony: A Lisp ELIZA Escapes and is Misinterpreted by the AI Community
  11. Another Wave: A BASIC ELIZA turns the PC Generation on to AI
  12. Conclusion: A certain danger lurks there
  13. Acknowledgements and References

3 The Intelligence Engineers

The founding father of the effort to build intelligent machines was, of course, Alan Turing (although Ada Lovelace may have been the founding mother, as will be seen shortly). Turing is publicly most famous for having led the team that built the “bombe”, the machine that made it possible to decipher enemy messages in the Second World War.[24] However, in academic circles, Turing is best known for the mathematical construct that now bears his name, the Universal Turing Machine. And in AI circles – and recently in public, as AI has come more into public focus – Turing is associated with the Imitation Game, which we now call the Turing Test.[44]


Since we are discussing ELIZA, which was apparently a chatbot, the reader might expect me to go straight to the Turing Test. However, I am concerned here primarily with ELIZA as a computational artifact, not with whether ELIZA was or was not intelligent; we can all agree that it was not, and we shall see later that Weizenbaum himself had specific reasons for not thinking of ELIZA as intelligent. Thus the Turing Test plays little role with respect to ELIZA, at least as that program was conceived by Weizenbaum, so we shall leave it aside in the present exploration, except for one important detail that arises, almost in passing, in Turing’s Mind paper.


Turing notes that over a century before his effort, Ada Lovelace had described the potential of Babbage’s Analytical Engine to “act upon other things besides number[s]”:


“The operating mechanism [...] might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations, and which should be also susceptible of adaptations to the action of the operating notation and mechanism of the engine. Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent [...]”[23]


We shall soon see that Ada’s insight that machines could act upon other things besides numbers was enormously prescient, foreshadowing the concept of symbolic computing, which was to become, and remains today, one of the foundations of artificial intelligence, and which is central to the pre-history of ELIZA.
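
To make the notion of symbolic computing concrete, here is a minimal sketch (my illustration, not anything from the historical record) in the spirit of Lovelace’s observation: a program whose operations are defined over symbols and their mutual relations rather than over quantities. The tuple-based expression format and the function name are assumptions invented for the example.

```python
# Symbolic differentiation over tiny expression trees: the program acts
# on structure and symbols ("x", "+", "*"), not on numeric values.
# The expression representation is an invented convention for this demo.

def deriv(expr, var):
    if expr == var:                  # d(x)/dx = 1
        return 1
    if not isinstance(expr, tuple):  # constants and other symbols
        return 0
    op, a, b = expr
    if op == "+":                    # sum rule: (a + b)' = a' + b'
        return ("+", deriv(a, var), deriv(b, var))
    if op == "*":                    # product rule: (a * b)' = a'b + ab'
        return ("+", ("*", deriv(a, var), b), ("*", a, deriv(b, var)))
    raise ValueError(f"unknown operator: {op}")

# d(x*x + x)/dx -> ('+', ('+', ('*', 1, 'x'), ('*', 'x', 1)), 1)
print(deriv(("+", ("*", "x", "x"), "x"), "x"))
```

The machine here “acts upon other things besides number”: its inputs and outputs are expressions, and the computation is a rearrangement of their parts – exactly the leap Lovelace anticipated.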


But first let us return to Turing’s mathematical contribution, Turing’s Universal Machine, now simply called the “Turing Machine”. The Turing Machine was Turing’s contribution to a line of inquiry in the early part of the 20th century that included Turing, Kurt Gödel, and Alonzo Church, all addressing a problem posed in 1928 by Hilbert and Ackermann called the “Entscheidungsproblem”.[3] Hilbert’s challenge was to find an algorithm that could determine whether an arbitrary mathematical proposition is provable. Gödel addressed this problem through what we now call “Gödel numbering”, his method of turning mathematical expressions into numbers, and showed that any consistent formal system powerful enough to express arithmetic contains true statements that it cannot prove. Church reached the same conclusion by formalizing the general recursive functions in a system called the “Lambda Calculus”. And Turing addressed the problem by describing a universal machine that can compute any computable function. Turing then showed, equivalently to Gödel and Church, that no algorithm can decide, for an arbitrary program, whether that program will ever come to a halt on his machine – what we now call “the halting problem”.[37]
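
Turing’s result can be caricatured in a few lines of modern code. The sketch below is mine, not Turing’s formulation; `halts` is a hypothetical oracle, not a real function, and the point of the sketch is precisely that it cannot be implemented.

```python
# The diagonal argument behind the halting problem, as a sketch.
# `halts` is a HYPOTHETICAL universal decider; Turing showed none can exist.

def halts(program, argument) -> bool:
    """Pretend oracle: True iff program(argument) eventually halts."""
    raise NotImplementedError("no algorithm can decide this in general")

def diagonal(program):
    """Do the opposite of whatever `halts` predicts about program(program)."""
    if halts(program, program):
        while True:      # predicted to halt, so loop forever
            pass
    return "halted"      # predicted to loop, so halt immediately

# Does diagonal(diagonal) halt? If halts(diagonal, diagonal) is True,
# diagonal loops forever; if False, it halts at once. Either answer is
# wrong, so no total `halts` can exist.
```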


The connection between Turing’s interests in universal computers, computable functions, and intelligence is obvious: if a machine can compute any computable function (leaving aside provability, which is not prima facie relevant to human intelligence), and intelligence is supposed to be some sort of function (which may be separately debated, but which Turing assumed), then if one’s goal is to understand intelligence, and perhaps even to create an intelligent machine, it is important to be able to tell when you have succeeded. Thus Turing’s invention of the Imitation Game, now famously called the “Turing Test”.


Turing’s Universal Machine was a theoretical construct. But in 1943, he designed and succeeded in building, with other engineers at Bletchley Park, a near-instantiation of his theoretical machine, called Colossus. Dyson observes that Colossus “was an electronic Turing machine, and if not yet universal, it had all the elements in place.”[24, p. 256] The Turing Machine, and its instantiation in Colossus, represented an extremely important advance in computing, even though computing was only in its infancy. Up until that time, all computers were “hardwired” – that is, they executed a program that was wired into their hardware. The bombe was of this type. The important conceptual advance, due to Turing and, at about the same time, to John von Neumann in the EDVAC project, was to store the program on a medium that is not fixed, but can be changed by the program itself. In Turing’s machine (and Colossus) this was a tape. In von Neumann’s case it was an electronic memory.[4] This way of thinking about computers – as “stored program” rather than “hardwired” – was the engineering revolution hidden in Turing’s theoretical construct: computers could not only do complex calculations, but could manipulate their own programs, much as intelligent agents engage in reasoning, planning, and other meta-cognitive activities wherein we think about and modify our own thoughts. We now call such machines “von Neumann-style” machines, although they could just as reasonably, and perhaps even more so, be called “Turing-style”.
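
As a toy illustration of the distinction (my sketch, with an invented instruction format, not any historical machine’s), consider an interpreter whose instructions live in the same mutable memory as its data, so that the running program can overwrite its own instructions – something a hardwired machine like the bombe could not do.

```python
# A toy stored-program machine: instructions and data share one mutable
# memory, so instruction 1 rewrites instruction 2 before it executes.
# The instruction set ("print", "store", "halt") is invented for the demo.

memory = [
    ("print", "hello"),                    # 0: an ordinary instruction
    ("store", 2, ("print", "rewritten")),  # 1: overwrite memory cell 2
    ("print", "original (never runs)"),    # 2: replaced at runtime
    ("halt",),                             # 3
]

pc = 0  # program counter
while True:
    op = memory[pc]
    if op[0] == "print":
        print(op[1])
    elif op[0] == "store":  # self-modification: code writing over code
        memory[op[1]] = op[2]
    elif op[0] == "halt":
        break
    pc += 1

# Prints "hello" then "rewritten": the program has changed its own program.
```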


Author:

(1) Jeff Shrager, Blue Dot Change and Stanford University Symbolic Systems Program (Adjunct) ([email protected]).


This paper is available on arxiv under CC BY 4.0 license.