
Can Code be Self Aware? Musings on Studying Computer Science at MIT (Part I)

by Rizwan Virk, December 16th, 2018

I remember first thinking about self-awareness, like many things, from watching too much science fiction. Was Data, the android in Star Trek: The Next Generation, a conscious being, or just a “machine”?

OK, robots are one thing — they have a physical presence, and whether or not they are conscious, most machines have some basic self-diagnostic capabilities and are aware of the state of their “physical body”. Even a car is now “aware” of whether its door is open or the seat belt is on.

There are examples of robots passing the Mirror Test (read about it here), which was created by Gallup in 1970 to see whether an animal (or a species of animal) could recognize itself. According to Gallup, most human infants don’t pass this test until they are at least 18 months old — which is the point at which some level of “self-awareness” exists (at least as measured by this test).

What about Code?

But the question here is not so much whether a robot can be aware of itself, but whether a computer program can be aware of itself. In ST:TNG, was the “computer” on the Enterprise, which was just software, self-aware? That’s another question altogether.

It wasn’t until I was studying computer science and learning all about programming and recursion at MIT that I really began to think about the question of what it means for code to be aware of itself.

As usual, it was science fiction that gave me a kick in the pants to think about this question.

In 1991, nearing the end of my time studying computer science at MIT, the movie Terminator 2 came out, and it had a line that just wouldn’t leave me.

Skynet becoming self-aware wasn’t a good thing … but what does it mean?

The Terminator in this incarnation (played by Arnold Schwarzenegger) was a friendly one (having been reprogrammed by John Connor in the future). Sarah Connor (Linda Hamilton) was interrogating him about the development of Skynet (the AI that tries to wipe out humans in the future):

Terminator: The Skynet Funding Bill is passed. The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate.

Anyone who has watched these movies knows what happens next, as Arnold explains in the now famous line:

Terminator: It becomes self-aware at 2:14 a.m. Eastern time, August 29th.

And of course, there is the inevitable response, the one that Elon Musk and others, ranging from Oxford philosopher Nick Bostrom and MIT physicist Max Tegmark to Stephen Hawking, are warning us could happen in our future.

Terminator: In a panic, they try to pull the plug.

Sarah Connor figures out what happens next all on her own:

Sarah Connor: Skynet fights back …

Doomsday scenario aside, I remember wondering what it would mean for Skynet, which was an AI (and, as far as I could tell, a collection of code), to be self-aware.

Was it even possible for code to be “aware” of itself?

Self Awareness and Killer Programs … Maybe?

Earlier that year (or maybe it was the year before, I can’t remember), I stumbled rather unexpectedly into a similar question during one of our computer engineering classes, 6.004 (Computation Structures), the course in the EECS department where we learned to build a computer from the ground up, called a Maybe machine.

We carried a little brown suitcase around campus that housed a collection of wires and digital circuitry, which were linked almost one for one with the assembly code we were learning (I had done a little assembly-level programming with an Apple II computer back home but had no idea that the little instructions I was coding gave direct instructions to the processor to do X or Y in the physical registers!).

In the class we had set up a primitive operating system and were learning about running multiple processes, when the instructors decided to hold a competition between two programs running as separate processes on the same system — a battle of the programs, if you will. The goal was for our computer program to battle, or disable, the other computer program.

The basic design of most computers today traces back to von Neumann, who envisioned memory and storage as key parts of the computer. A running program would have its instructions stored in memory, along with the data that the program was processing. In a system running more than one program (think multiple windows on your MacBook), the system would swap out one program for the other. This basic structure — code and data being stored in the same way — had interesting consequences, as we’ll see later in this article.

One very cold, snowy night that year I lugged my Maybe machine/suitcase from the lab across campus, accompanied by my classmate Ranjan, who was singing, as he did pretty much the whole semester when we carried the Maybes around campus (“come on bay-beee … don’t say maybeeee”).

I said good night to Ranjan and went back to my dorm room to formulate a plan of how I could write a program that would compete. As I recall, the competition was an optional assignment, but I thought since I was doing pretty well in the class, I would give it a shot.

It was obvious that for one program to kill the other, it would need to be aware of the other program and where it was running in memory. I wrote a little program that figured out which program was currently running, found the location of the “other” program in memory, and then simply wrote over it.

This little program raised practical questions at first, which then led to some philosophical questions in my mind. It didn’t modify the “other” program on disk (actually we didn’t have disks but some kind of EPROMs, as I recall — this was 25 years ago, though) but modified it in memory. But how could I be sure that the opposing program would stop running?

My first solution was to write a random number over the next instruction that was supposed to be executed. Then I realized that a random number didn’t guarantee that the program would stop running; it would simply make the program do something random (let’s not get started on the fact that “random” isn’t really “random” in computers).

Instead, I made it simply write a “zero” as the next instruction that the “other” program should execute, which was a signal to “stop” running the program.
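To make the idea concrete, here is a toy simulation in Python (entirely made up for illustration; it is not the actual 6.004 assembly, and the names are mine): memory is a shared list of instructions, a zero means “halt”, and the killer simply writes a zero at the address the other process will execute next.

# Toy sketch, not the real Maybe-machine code: memory holds both programs'
# instructions, 0 means "halt", and each process has a program counter (PC).
HALT = 0
NOP = 1

memory = [NOP] * 64                  # pretend both programs are streams of NOPs
pc = {"rizkill": 0, "victim": 32}    # where each process will execute next

# rizkill's entire job: write HALT at the address the victim's PC points to
memory[pc["victim"]] = HALT

def run(name):
    # when the scheduler swaps a process back in, it fetches from its PC;
    # the victim's very first fetch is now HALT
    while memory[pc[name]] != HALT:
        pc[name] += 1
    return "%s halted at address %d" % (name, pc[name])

print(run("victim"))                 # victim halted at address 32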

Carrying around Maybe Machines in the Snow at MIT

At about 3 am, as it often did in those days, philosophy started to creep in, and I started to wonder whether the code I was writing was “aware” of itself. In a sense it had to be, and this awareness was coded into the program itself … otherwise it couldn’t judge which program it was overwriting! It might overwrite itself!

At about 4 am, I realized that it was possible for my program code to rewrite itself! Normally a program written in a language like C or BASIC or Java won’t modify itself. It hadn’t even occurred to my young programming mind that this could be done, but with assembly code it was just like changing any other value in memory.

I would later realize that this is a fairly well-studied area of computer science, usually called self-modifying code. The only reason it’s possible, I realized, is that in the von Neumann architecture both data and code (which are represented as numbers in memory) are stored in the same way.

I quickly re-coded my program so that it would simply write a zero as the next instruction for both programs — editing my own program as well as the opposing program. Since my program had to be in memory while it ran, it would be swapped out of memory when the other program ran. And since I had overwritten the next instruction for the opposing program, the only thing that program would do is “end”.

Can a program overwrite itself? It turns out it’s not that hard!

Contrary to my original worry, my program wouldn’t kill itself — I was overwriting the PC (or program counter), or rather the location the PC pointed to in memory. The next time my program ran, its PC would increment and the next instruction would just start running. The other program, which could not have been running while my program was running, would of necessity pick up at the location its own PC pointed to.

By doing this, in two simple instructions, my program was about as efficient as it could be, and the rest was gravy — I could make the next instruction do something like print “rizkill wins”.

In fact, the little program (yes, I called it “rizkill”) won the programming competition, killing all of my classmates’ poor programs, which all took way more than two assembly instructions to achieve anything.

Then, on a lark, the TAs decided to run rizkill against the graduate student TAs’ programs, and though it handily defeated most of them, there was one clever TA who had used the same trick, so it was a tossup which of our programs would win any given session (astute observers will note that it simply depended on which process ran first).

I was declared the winner among the undergrads and won a signed copy of Hackers, by Steven Levy, as my prize (unfortunately it was signed by our professor, Stephen Ward, a well-known computer science guy; I would’ve preferred to have it signed by the author, Steven Levy, but that’s another issue altogether!).

Self and Copies of Self

You could argue that neither version of my program was really “self-aware”, though I would argue it was definitely “self-referencing”, which is not quite the same thing. But it certainly got me thinking about other software programs that were “self-referencing” and whether this was a precursor to a kind of “self-awareness”.

You could program a piece of software to do a virtual version of Gallup’s Mirror Test — to see if the code would recognize some mark placed on itself. How? Since I had written a self-modifying program, I could’ve used various techniques for the program to “check itself” to see if it had been modified, using a virtual “mirror” of sorts.

Most likely, a checksum of some type would do the trick, or the program could keep a copy of its own code somewhere to compare itself to.

There is a whole class of programs in computer science that “reference” their own code — quines are self-replicating programs that spit out a copy of their source code as their only output. Douglas Hofstadter, whom we will talk more about later, coined the term “quine” in his classic work, Gödel, Escher, Bach (or GEB for short), although the ubiquitous von Neumann was among the first to talk about these types of programs.

Can a program output a copy of its self?

Programmers will recognize that a “quine” would be a program which has its source code in a string that can be printed out.

There are many examples of quines on Wikipedia and elsewhere. A simple one, written in Python, is the following (for more, see https://en.wikipedia.org/wiki/Quine_(computing) ):

s = 's = %r\nprint(s%%s)'
print(s%s)

Of course there are many variations on the simple quine — most computer programs that replicate themselves replicate their object code, which doesn’t require having the source code available in a string (as shown above) or de-compiling the code. This is sometimes called metamorphic code, which puts out some “version” of itself, usually an object-code version.

We all know of self-replicating programs, which are commonly called “viruses” because of their behavior of copying themselves onto your machine (and on to other machines) without your knowledge.

The Mirror Test for a Computer Program

Would it be possible to write a program that could pass a “virtual version” of the Mirror Test? That is, if you placed a mark on the program (altering it in some way), could you have a program that would be aware of it?

It would be like writing a program that does the check sketched below at essentially every statement. It would be much harder to do this in a multi-program environment like the one we were running, but let’s ignore that for now.

A robot looking in the mirror

Some simple pseudocode:

// at the beginning of the program
X = storedChecksum()   // this would probably need to be stored on disk somewhere;
                       // let's ignore how the program would calculate its own checksum

// later on, perhaps every so often:
Y = getCurrentChecksum()

If (X != Y)
    Print ("I have been modified … my pretty!")
Else
    …   // go on with whatever the program is meant to do
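In a real language, a minimal version of this might look like the following Python sketch (the names and the stored checksum value are placeholders I’ve made up, assuming the program can read its own source file and that a known-good checksum was recorded somewhere beforehand):

import hashlib
import sys

KNOWN_GOOD = "..."   # hypothetical checksum recorded earlier (stored on disk, say)

def current_checksum():
    # hash this program's own source file
    with open(sys.argv[0], "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

if current_checksum() != KNOWN_GOOD:
    print("I have been modified ... my pretty!")
else:
    pass   # go on with whatever the program is meant to do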

Checksums are used mostly for error detection, particularly when downloading large amounts of data or code. If you download Microsoft Office, let’s say, you might be downloading over a gigabyte of data — it’s frequently broken down into smaller chunks, each chunk with its own checksum.

OK, so theoretically you could write a program that was aware of its “past state” and whether it has been modified, but this really doesn’t seem to rise to the level of “self-awareness” the way we think about it in Artificial Intelligence.

Recursion, Self-Referencing, and Context

Let’s return to the question at hand: Are any of these programs “aware” of themselves at some level? Or are they just “self-referencing”?

Getting back to my computer science education at MIT, I realized that we started learning about self-referencing programs pretty early on.

It turns out that our introductory software class at MIT, 6.001 (“Structure and Interpretation of Computer Programs”), unlike those at many other universities, used the language Scheme (a dialect of LISP), which had almost no syntax, structure, or pre-defined commands compared to most programming languages.

Unlike freshmen at other universities, who were learning to program in practical languages like C (at the time; Java came later), we were writing heavily recursive programs right off the bat in a language that almost no one used (and that I haven’t needed again in my 25 years of programming!).

Still, this lack of structure made it essential to understand recursion very early on.

Recursion is when a program calls itself to complete a smaller version of the problem, and it is a pretty standard technique used in computer programs. Let’s say you want to raise 3 to the 4th power. You can express this as 3 * 3 * 3 * 3, but you can also express it as 3 * (3 raised to the 3rd power). Inside the parentheses is a smaller version of the exact same problem. And that smaller problem can be expressed as 3 * (an even smaller version of the problem, 3 raised to the 2nd power).

In pseudocode, you might use the following code snippet for a simple recursive approach to calculating powers or exponents of some base value:

Function recursiveExponent (int base, int exponent) returns int
{
    If (exponent == 0)
        Return 1;
    Else
        Return base * recursiveExponent(base, exponent - 1);
}

Is the program aware of itself? It is clearly calling itself, but does it know it is calling itself? Or is it just calling another program?

Recursion works like a loop, descending until it reaches a base case that returns a concrete value. In this case, any number to the 0th power is 1, so that’s the “base” of the recursion.

If we took out the base case, we’d just have a function calling itself:

Function recursiveExponent (int base, int exponent) returns int
{
    Return base * recursiveExponent(base, exponent - 1);
}

You would have an infinite loop — the exponent would keep descending, becoming a more and more negative integer. Leaving aside the mathematical interpretation, on most computers this wouldn’t go on forever; depending on the operating system you were running it on, it would reach the limits of how much memory was allocated for the program. You would get the dreaded stack “overflow” error. This is an artifact of how programs are run today, which is what allows for recursion: a logical “stack” (a first-in, last-out type of data structure). Each time a new function is called (whether recursive or not), a new frame is created on the stack, which becomes the “context” for that particular run of the function.

If the recursive program went on forever, it would eventually use up all the memory allocated for the stack.
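You can see the effect quickly in Python, where the interpreter enforces a recursion limit instead of letting the raw stack overflow (a sketch with the base case deliberately removed; the function name is mine):

import sys

def runaway_exponent(base, exponent):
    # same as recursiveExponent, but with the base case removed
    return base * runaway_exponent(base, exponent - 1)

try:
    runaway_exponent(3, 4)
except RecursionError:
    print("ran out of stack after roughly", sys.getrecursionlimit(), "calls")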

These problems are specific to recursive programs, because an iterative version of the same program, written with a loop, wouldn’t keep allocating memory on the stack — it would just keep running.

An iterative version of the same program might run for a long while (or at least up to the exponent), but doesn’t have any self-referencing going on:

Function iterativeExponent (int base, int exponent) returns int
{
    int result = 1;
    For (int i = 0; i < exponent; i++)
        result = result * base;
    Return result;
}

This might eventually cause problems if “exponent” is unusually large, since the value of “result” grows with each iteration and, in reality, variables have size limits.

But I digress. Back to the recursive program.

However, suppose we changed the program to call a different function (that maybe had a similar code base):

Function recursiveExponent (int base, int exponent) returns int
{
    If (exponent == 0)
        Return 1;
    Else
        Return base * recursiveExponent2(base, exponent - 1);
}

Now the program isn’t calling itself but is calling another function.

Does the program know the difference?

Now you could say it is neither self-aware nor self-referencing. Still, the second function might call the first (as in the sketch below), and then you have a (slightly) more complex recursive system which is still self-referencing, so the problem remains — is a system that is self-referencing, at some level, aware of itself?
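Here is that slightly more complex system sketched in Python (the names are mine): each function calls the other, so neither refers to itself directly, yet the pair taken together is still self-referencing.

def recursive_exponent(base, exponent):
    if exponent == 0:
        return 1
    return base * recursive_exponent_2(base, exponent - 1)   # calls the *other* function

def recursive_exponent_2(base, exponent):
    if exponent == 0:
        return 1
    return base * recursive_exponent(base, exponent - 1)     # ...which calls back

print(recursive_exponent(3, 4))   # 81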

A Diversion: Recursive Chess and how I (finally) solved the 8 Queen Problem

Recursion is a pretty powerful technique, though it’s more resource intensive than an iterative approach, and it turns out most problems can be solved by either approach when you really think about it.

The real power of recursion is that it teaches the idea of “abstraction”, or the “black box”, applied to a smaller problem — rather than figuring out the whole problem, you let “someone else” figure out a part of it and then simply “add your part of the solution” on top with a simple calculation.

What makes it recursive, rather than just calling someone else to solve the smaller problem, is that in recursion that “someone else” is really just yourself. It would be like cloning yourself and asking this “smaller version” of yourself to solve the smaller problem and hand you the solution.

When I was in high school learning programming, I didn’t really understand recursion or use it much. This was primarily because I was using Applesoft BASIC, and while loops are easy and were taught early on in those days, recursion is not so easy in a procedural language like that, as opposed to a more modern, object-based language.

My friend Gus (who was on the computer programming team with me at the Science Olympics) and I had stumbled onto a book of mathematical problems that could be solved by computer programs (this was pre-internet, and we had to actually go buy the book, or get it from the library!). One of its exercises was the following problem, known as the 8 queens problem, which had stumped our rapidly growing but still immature programming skills:

A chessboard has 8 rows and 8 columns (64 squares) — it’s possible to arrange 8 queens on this chessboard such that no queen is able to kill any other queen. Write a program to find one configuration. Extra Credit: Write a program to find every configuration.

Today you can do a quick search on the internet for various solutions, with a whole Wikipedia page dedicated to the 8 queens problem: it’s a good example of what’s called a simple but non-trivial problem. The brute-force method shows there are 4,426,165,368 (i.e., 64 choose 8) possible blind placements of eight queens. There are exactly 92 solutions to this problem, and it’s possible to simplify by ruling out configurations that have queens on the same row or column, so theoretically you should be able to get there without having to go through every single possible placement.

A solution for the 8 queens problem

But in those days we couldn’t look it up on Wikipedia, and had to dive right in. It doesn’t seem like a particularly complicated problem at first — you just write a loop within a loop to figure it out. Or so we thought. When Gus and I tried to solve it using simple iteration, it got hairy real fast, or became so computationally intensive that the solution would take forever in Applesoft BASIC. We didn’t know about stacks and frames of context, which are an essential part of recursion.

When I went to MIT that fall and took the first programming class, 6.001 (where we were doing recursion up the yin-yang), I realized that the 8 queens problem could be solved via a smaller version of the same problem: arrange 7 queens on the 8x8 chessboard such that none can attack each other. If you could solve that problem, then you could just put the 8th queen in the remaining empty column/row. The 7 queens problem, of course, could be solved the same way by solving the 6 queens problem, and so on (a minimal version of that idea is sketched below).
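Here is a minimal recursive sketch of the idea in Python (not my original Scheme solution; the names are mine, and it uses the standard backtracking formulation, where each column’s queen is placed on top of a recursive solution for the earlier columns). It finds one configuration:

def safe(queens, row, col):
    # queens[i] is the row of the queen already placed in column i;
    # a new queen at (row, col) must not share a row or a diagonal with any of them
    for c, r in enumerate(queens):
        if r == row or abs(r - row) == abs(c - col):
            return False
    return True

def place_queens(col=0, queens=(), size=8):
    # try to extend the partial solution `queens` (columns 0..col-1) to a full board
    if col == size:
        return queens                           # all queens placed
    for row in range(size):
        if safe(queens, row, col):
            solution = place_queens(col + 1, queens + (row,), size)
            if solution is not None:
                return solution
    return None                                 # no row works for this column; backtrack

print(place_queens())   # one valid configuration, e.g. (0, 4, 7, 5, 2, 6, 1, 3)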

My memory is a little fuzzy on the complications that arise when you do this, but I do remember going home that summer and showing Gus my solution (which only found one configuration, rather than all of them); we were both convinced that recursion was the most powerful way to solve any problem in computer science! (NOTE: Gus’s name has been changed to protect the innocent.)

Next Part, Part 2: Self Aware Code vs. Data and Strange Loops and AI (coming soon!)

If you liked what you read, please clap 20x and give me a follow on Medium.