The Road to Killer AI: ML + Blockchain + IOT + Drones == Skynet?

Written by rizstanford | Published 2018/05/05
Tech Story Tags: artificial-intelligence | singularity | robots | blockchain | iot


How Today’s Hot Buzzwords Could Lead to Tomorrow’s Nightmare

Lately, there has been a lot of concern about the explosion of AI, and how it could reach the point of 1) being more intelligent than humans, and 2) deciding that it no longer needs us and could, in fact, take over the Earth.

Physicist Stephen Hawking famously told the BBC: “The development of full artificial intelligence could spell the end of the human race.” Billionaire Elon Musk has said that he thinks AI is the “biggest existential threat” to the human race.

Computers running the latest AI have already beaten humans at games ranging from Chess to Go to esports titles (which is interesting, because esports are a case where AI can beat humans at games that were built as software from the ground up, unlike Chess and Go, which were developed before the computer age).

AI has been making dramatic leaps over the past few years — the question that Hawking and Musk are asking is: Could AI evolve to the point where it could replace humans?

In the Terminator Universe, Skynet takes over and has both Terminators and H-K’s

The Nightmare Scenario in Science Fiction

If this scenario sounds like science fiction, that’s because science fiction writers have posed it again and again.

One of the most popular is, of course, Skynet, the intelligence that takes over in the Terminator universe and decides to wipe out most of humanity and enslave the rest (except for the resistance fighters, led by John Connor, but that involves a terminator travelling back in time, and time travel will be handled in another essay).

In the Matrix, super-intelligent machines have enslaved the human race

In the perhaps equally popular Matrix trilogy, super-intelligent machines take over the planet as well, but rather than killing humans they enslave them in a unique way. In order to ensure that electricity generated by the human brain can be put to use, the super-intelligent machines put humans in pods, keeping our minds busy playing a giant video game or simulation (i.e. the Matrix). [If you haven’t yet, you might want to read my article “The Simulation Hypothesis: Why AI, Quantum Physics and Eastern Mystics Agree We are all in a Giant Video Game”].

In Frank Herbert’s Dune, we find that there are no computers, and humans are trained to perform computing tasks. In fact, there is a super-important law from the Orange Catholic Bible: Thou shalt not make a machine in the likeness of a human mind.

Why? It turns out that in the distant past (from the point of view of the novel Dune, but our distant future), humanity was enslaved by, you guessed it, a super-intelligent machine! This ended with the Butlerian Jihad, in which humans revolted and defeated Omnius, the almost omniscient machine that had many copies of itself and had enslaved humanity. Never again!

Variations of this theme, where there isn’t a single killer AI but rather robots or androids that have the potential to take over the human race, are also plentiful. In Battlestar Galactica, the Cylons, which were created by man, rebel and try to kill off the human race. Westworld and Blade Runner explore similar themes.

Could this “Nightmare Scenario” Really Happen?

I would argue that on the one hand, it’s a long road from today’s AI to this kind of nightmare scenario, where killer AI takes over and either kills or enslaves the human race.

Today’s AI is not well suited to the task of killing or enslaving the human race, and it would take more than just advances in AI and ML (Machine Learning) to get there; it would take advances in many other areas of computing software and hardware.

On the other hand, these “other areas” are actually progressing rapidly and we can hear the buzzwords all around us in the tech world: blockchain, peer-to-peer computing, IOT (Internet of Things), robots and drones. This rapid progression could make the long road from today’s specialized AI to killer AI much, much shorter.

What would it take to get from where we are today to this nightmare scenario? As purely an academic exercise, I’d like to explore the steps and milestones along this dangerous path.

The Gates of Killer AI

For this “nightmare scenario” to plausibly occur, there are a number of prerequisites. I call these the “Gates of Killer AI” — if we see our technology “opening” these gates, then perhaps it’s time to start heeding the warnings of the likes of Hawking, Musk, and others.

This list isn’t complete (i.e. there are other Gates that would have to be opened along the way), but it’s probably a good minimal list.

The “Gates of Killer AI” don’t have to happen sequentially — they can be opened in any order. The best way to illustrate the “Gates of Killer AI”, since they are currently closed, is to use science fiction references so we can visualize what it might involve, and to use today’s software and hardware as a comparison.

The 4 Gates of Killer AI are:

  • GATE #1: AI and ML which is general purpose and not specific. Most of the AI that is out there today, even if it uses generic algorithms, is trained by using data for very specific tasks — whether for playing a game, driving a car, predicting stock market models, communicating, analyzing images better than doctors and making diagnoses, etc. Today’s AI is the second wave of AI (data-driven machine learning) whereas the first wave was more heuristic (rule-driven). Tomorrow’s AI might combine elements of these two approaches with other approaches to become more general purpose. This Gate brings up a broader discussion about what is AI vs. ML and how general our existing AI can become.
  • GATE #2: AI which can easily interface with the physical world. AI might be just software running on a server, but that doesn’t mean that it has an awareness of the physical world (think robots and self-driving cars) nor does it have access to weapons (think drones or H-K, hunter-killers from the Terminator universe). This Gate brings up the bigger discussion of IOT (the internet of things), Machine Vision, and how this expands the ability of any AI to interface with the physical world as well as robots that have access to weapons.
  • GATE #3: AI which does not have an off switch. In some ways this may be the most important gate of the four; if this gate isn’t crossed, then even if an AI moves from benevolent to malevolent (at least from a human point of view), it’s relatively easy to either “shut it down” by using an off switch, or destroy the physical micro-processors and code/virtual machines that are hosting the AI. This Gate brings up the broader discussion that since all AI today relies on some form of computer technology, will blockchain and peer-to-peer systems be a precursor to a system that is “spread all over” and which cannot be shut down?
  • GATE #4: AI which is self-aware and prioritizes survival. This gate is a little bit harder to define. What does it mean to be self-aware? What does it mean to prioritize its own survival? What other values or priorities might an AI have? It’s not a given that AI has a will to survive, so this Gate brings up a broader discussion about values and AI.

Let’s examine these Gates in some detail.

Gate #1: From Specific to General: The Waves of AI

I would assert that the current wave of AI, which is focused on Machine Learning (ML), while an improvement on the first wave of AI, which was based on rules and heuristics, is still not sufficiently advanced to take us into the nightmare scenario.

When I was studying computer science at MIT, I remember being told that AI research was initially about trying to find “rules” and “symbolic representations” that mimicked the human mind. This became difficult to do, since humans are good at recognizing patterns and exceptions to rules and computers weren’t. This led to the creation of Fuzzy Logic, which relied on rules that were a little less explicit. In the 1980s, Japanese researchers predicted they would have AI that was as good as humans at many tasks by the end of that decade.

Thirty years later, we are just starting to get there. Today’s explosion of AI and ML is more data-driven and less rule-driven, relying on ideas based on neural networks. A neural network is a logical system that tries to replicate the behavior of neurons in the human mind. This type of AI, rather than being set up with rules, is fed data for a specific task, and it uses that data to change the weights of specific connections in the neural network. Multiple layers of “virtual neurons” are used, and the larger the data set used to train the network, the better the outcome it might produce.

This type of Machine Learning has proven very good at tasks such as recognizing handwriting or recognizing certain types of images. Some computers have become better than most humans at playing certain games, like Go or Chess.

While back-propagation neural network algorithms can be thought of as generic, most AI we use today is still trained only on specific tasks, and it is that task-specific training data which determines the weights.
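
To make the “generic algorithm, specific training” point concrete, here is a minimal sketch (my own toy illustration, not code from any production system) of a tiny neural network trained with back-propagation on one narrow task, the XOR function, using nothing but NumPy. The learning procedure is completely generic; everything the network “knows” comes from the four task-specific training examples it is fed.

```python
# A toy second-wave-AI example: a generic back-propagation learner,
# fed data for one very specific task (XOR).
import numpy as np

rng = np.random.default_rng(0)

# Task-specific training data: the XOR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of "virtual neurons" with randomly initialized weights.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(20000):
    # Forward pass through the layers of virtual neurons.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Back-propagation: push the error backward and nudge the weights.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # should converge toward [[0], [1], [1], [0]]
```

Swap in a different data set and the very same algorithm learns a completely different, but equally narrow, task.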

Self-driving cars rely on large data sets from the real world, and use a combination of the “training data” and “rules about driving”. When Google was testing its self-driving cars, it found that one of the major problems was not that the car didn’t follow the rules, but that humans didn’t! For example, most human drivers rarely came to a full stop at a stop sign, while the self-driving car was waiting for the other driver to come to a complete stop.

Most AI that is presented in science fiction has “magically” overcome this gate. Most AI that is used today in real world applications has not. A general purpose AI could mean AI which has passed the Turing test (more on this later).

A general purpose AI would not only be usable in any application; it could also learn new things on its own, unsupervised, things which are not related to the tasks it has previously been trained on.

In Star Trek: The Next Generation, Data was a General Purpose AI

The “positronic” brain system that the android Data uses in Star Trek: The Next Generation is an example of a general-purpose AI. AI by itself doesn’t always learn and evolve beyond the initial parameters.

When will Gate #1 be crossed? I don’t have an estimate yet, but you can imagine a scenario where an AI that has been trained on many tasks is then trained to choose which specific AI subset to use. This “mini-version” of the Gate may be opened in a few years, while we may be decades away from a general purpose AI like Data, which will probably require a new, third wave of AI that goes beyond simple rules or simple data/training sets.

GATE #2: AI which can easily interface with the physical world

In a recent episode of the X-Files (season 11, episode 6), we see a scenario that is closer to our current evolution of technology and AI, one that shows the frightening possibilities of tying AI to the physical world.

In this episode, Mulder and Scully are chased by AI which controls everything around them, including self-driving Uber-like cars, refrigerators, delivery drones from Amazon, and of course, the cooking in the restaurant where the problem starts. They try to evade the AI by getting rid of their phones and car keys so there would be no way for it to track them. In the end, they realize that the AI was tormenting them because they didn’t leave the robot cooks a “tip”. This, too, was a form of training the AIs to behave a certain way.

In the X-Files Robot Chefs interface with every physical thing in our world to torment Mulder and Scully

Even today, AI which has been integrated into physical devices, like self-driving cars, does have a limited awareness of the physical world and can issue commands based on this awareness. This field depends heavily on the evolving field of machine vision: capturing images with cameras and then interpreting what the objects in those images are (street signs, clothes, houses, and most importantly, people).
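
As a rough illustration of what that looks like in practice (my own sketch, assuming a recent version of PyTorch/torchvision; the image file name is hypothetical), a program can attach labels to what a camera sees by running a frame through an off-the-shelf classifier pretrained on ImageNet:

```python
# A minimal machine-vision sketch: label what a camera sees using a
# pretrained image classifier. "street_scene.jpg" is a hypothetical file.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT          # ImageNet-pretrained weights
model = models.resnet18(weights=weights).eval()    # generic network, specific training
preprocess = weights.transforms()                  # resize / crop / normalize

img = Image.open("street_scene.jpg")
batch = preprocess(img).unsqueeze(0)               # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top = probs.topk(3)
for p, idx in zip(top.values, top.indices):
    print(f"{weights.meta['categories'][idx.item()]}: {p.item():.1%}")
```

A self-driving car or armed drone does something conceptually similar, only with models trained on street scenes or targeting data rather than a generic ImageNet classifier.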

Since this gate includes interfacing with the physical world, it naturally assumes some kind of robots, but we are still far away from the Terminator or replicants in Blade Runner or hosts in Westworld.

Today, many physical robots are specialized and do only one thing. At a modern auto plant, like the Tesla factory for example, there are robots which pick up cars and put them down on the other side of an aisle to continue the assembly lines. These robots are aware of their environment in a limited way and perform only specific tasks.

Robots that can interface with the physical world need not be human-like. The robots from Boston Dynamics (which was acquired by Google and is now owned by SoftBank) are frightening in their appearance, more like robotic animals which can move extremely fast in the physical world.

The “Big Dog” robot from Boston Dynamics can move extremely fast in the physical world

Autonomous drones are probably the closest to having this ability. In the Terminator fictional world, Skynet is put in charge of all US Military missions, after it shows an efficiency level not possible with human pilots.

Today’s military drones are piloted from far away, often from the other side of the planet. To the drone operators, it sure looks like a video game. As we mentioned, AI is already showing signs of becoming better than humans at playing video games — it’s only a matter of time before these two areas are combined, creating AI that is not only better at targeting locations, but is armed and able to strike autonomously, without human intervention.

When will Gate #2 be Crossed? This Gate is being opened slowly year by year. My estimate is that it’s only a matter of a few years (or a decade at most) before AI can interface fully with the physical world, using machine vision and machine learning that has been trained to recognize and make decisions about almost all aspects of the physical world.

When will AI have access to weapons? Vladimir Putin has already gone on record saying that the nation that has the best AI will rule the world. This means years, not decades.

GATE #3: AI which does not have an off switch

This gate is the one that’s becoming more difficult to keep closed as technologies like IOT and Blockchain have emerged. The android Data had an off-switch in Star Trek: The Next Generation, located on his back. More importantly, Data allowed Captain Picard and others on the Enterprise crew to use it when necessary to shut him off.

In Terminator 3, Skynet was not on any given server and couldn’t be turned off!

Once again, I’d like to turn to science fiction. In Terminator 3: Rise of the Machines, the heroes, John Connor (played by Nick Stahl) and his Terminator, break into the command center at China Lake in hopes of destroying the server.

Once they get inside, John has the realization that he can’t destroy Skynet simply by turning off the server.

“By the time Skynet became self-aware it had spread into millions of computer servers across the planet. Ordinary computers in office buildings, dorm rooms; everywhere. It was software; in cyberspace. There was no system core; it could not be shut down.”

— John Connor, Terminator 3

That’s what makes Skynet the type of intelligence that we should be afraid of — intelligent software that could run on any device, replicated and spread out across the world, and which can’t be turned off easily.

As the IOT (the Internet of Things) grows, we are adding more and more machines with processors that could run software. At the end of the fourth season of HBO’s Silicon Valley, for example, the team needed to preserve data that was sitting on a set of physical servers in the team’s garage, and those servers were frying; pretty soon they would lose the data. Their AI decided to send the data across 30,000 “smart refrigerators” that were connected to the internet in order to preserve it.

While this might look like a good thing — having processing power and data on millions of devices — it brings up an interesting point about the replication of data across the world on mobile phones, refrigerators, and other things.

When we look at computer viruses, which are programs designed to spread, we realize that they were originally tied to the hardware. By tying a virus or computer program to specific hardware, you have the ability to neutralize it. When you write a virus, you typically write it in C and then compile it for the OS and hardware you are working on.

If a Skynet-like program is ever to be built, it needs to be able to survive on multiple devices across the world and not be subject to an “easy fix” or a simple shutdown of a particular type of OS.

Enter blockchain. The idea of blockchain is that there are multiple, decentralized, peer-to-peer computers around the world, all replicating a set of data and code that is being used to construct and validate the “blockchain”.
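
As a rough sketch of that core idea (illustrative only; real systems add consensus rules, digital signatures, and an actual peer-to-peer network), here is a minimal hash-linked chain that any peer can replicate and validate independently, which is why tampering with a single copy accomplishes nothing:

```python
# A toy blockchain: a hash-linked chain of blocks that every peer can
# replicate and validate on its own.
import hashlib
import json
import time

def block_hash(block):
    # Hash the block's contents (excluding its own hash field).
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def valid_chain(chain):
    # Any peer can run this check on its own copy of the chain.
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != prev["hash"] or cur["hash"] != block_hash(cur):
            return False
    return True

chain = [make_block("genesis", prev_hash="0" * 64)]
chain.append(make_block("alice -> bob: 5", chain[-1]["hash"]))
chain.append(make_block("bob -> carol: 2", chain[-1]["hash"]))
print(valid_chain(chain))           # True
chain[1]["data"] = "alice -> bob: 500"
print(valid_chain(chain))           # False: tampering breaks the hash links
```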

While Bitcoin was the original blockchain technology, its core code (the “miners”) was written in C++ and could be compiled for different operating systems, but it only did very specific things — i.e. transactions consisted of transferring bitcoin from one address to another. There was actually a scripting language, but it was very limited in what it could do, serving primarily as a way to define the conditions of release.

One of the reasons that Vitalik Buterin, who came out of the Bitcoin community, decided to pursue Ethereum was that he wanted a full, Turing-complete language that would run the same code on hundreds, thousands, or even millions of machines around the world. This VM (virtual machine) would be a “World Computer”. The idea of a universal VM was pioneered with Java back in the 1990s (when we thought it might be a universal language whose VM could run on any device), and since Ethereum, many other smart-contract languages and projects have emerged, looking to improve on Ethereum’s limitations.

Today, the Ethereum VM doesn’t really have access to things “outside” the virtual world, but as new cross-blockchain and internet-aware programming languages come into play, you could see new inventions in this area: a virtual machine that could run on computers and other devices around the world regardless of processor (this has already been established to some extent, but the IOT and mobile parts would still need to be realized).
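
To illustrate why such a “World Computer” is so hard to switch off, here is a toy sketch (my own illustration, not how the Ethereum VM is actually implemented) of a deterministic state machine replicated across many nodes: every node runs the same code over the same ordered transactions, so every surviving copy computes exactly the same state no matter how many copies you shut down.

```python
# Replicated deterministic execution: 1,000 nodes, one shared history,
# identical resulting state on every copy.
def apply_tx(state, tx):
    # Deterministic state-transition function (a stand-in for a VM running
    # a smart contract): move `amount` between account balances.
    sender, receiver, amount = tx
    new_state = dict(state)
    if new_state.get(sender, 0) >= amount:
        new_state[sender] = new_state.get(sender, 0) - amount
        new_state[receiver] = new_state.get(receiver, 0) + amount
    return new_state

transactions = [("alice", "bob", 5), ("bob", "carol", 2), ("alice", "carol", 1)]
genesis_state = {"alice": 10, "bob": 0, "carol": 0}

replicas = []
for _ in range(1000):                  # simulate 1,000 independent nodes
    state = genesis_state
    for tx in transactions:
        state = apply_tx(state, tx)
    replicas.append(state)

assert all(r == replicas[0] for r in replicas)
print(replicas[0])                     # every copy agrees on the same state
```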

This would answer the question which many geeks like myself had when seeing the Terminator movies: What would Skynet be coded in? The answer is some Turing-complete language that would automatically be run on machines around the world, automatically replicated, with each copy coming to the same conclusion: destroy humans!

The only way to shut off such a network of programs would be to shut down every single machine that it is running on. But if the program could replicate itself on smart devices like fridges and autos and other devices, we might find ourselves unable to turn off everything in time before such a “Killer AI” could take over.

GATE #4: AI which is self-aware and prioritizes survival.

This Gate may be the most difficult to truly cross because it is perhaps the most difficult to define. It is also divided into two parts — self-awareness and prioritizing physical survival.

We can use the HAL 9000 computer from the movie 2001: A Space Odyssey as an example here — when Dave Bowman tries to shut HAL down, HAL is aware of what Dave is trying to do and actively works against it, resulting in the famous line:

HAL: “I’m sorry, Dave. I’m afraid I can’t do that.”

Not only was HAL aware of itself as separate from Dave Bowman (and part of the ship), it was also prioritizing its physical survival over that of the humans. Let’s look into these two areas individually.

Self-Awareness. Alan Turing, a pioneer of today’s computer science, defined the Turing Test: a machine (or program) passes if humans cannot recognize it as artificial. The idea is that if you are talking to a machine (through a keyboard or in other ways) and you cannot tell that it’s a machine, then the AI has passed the Turing Test. Note that this test doesn’t define what “tell the difference” really means or how it might be done — it just talks about the outcome.

What is Self-Awareness exactly? This is difficult to define. I would frame a “Self-Awareness Test” as one that is passed by an AI or computer program that is aware of itself as a separate entity from the rest of the computer software, the hardware, and, as we saw in Gate #2, the physical world.

There is no equivalent test for “self-awareness” for AI, but perhaps there should be. We might need a well-defined test, like the Turing Test, of how an artificial intelligence acts and how we react, in order for us to say that it is self-aware. Perhaps we can call it the HAL 9000 Test or the Skynet Test!

For this gate to truly open, we’ll have to progress beyond surface-level self-awareness. Most computers have a sense of themselves as being separate from other computers, but this is defined by having a single IP address, or being on a single operating system, for example.

I remember the first time I thought about self-awareness was when I entered a programming contest at MIT. We were supposed to write a program which would “duel” with another program, both running on the same device. The program that killed the other program would win.

I was trying to find out the minimal amount of code I would need to “kill” the other program. I remember having an insight in the middle of the night that if my program overwrote both itself and the other program, then it would be able to win in two instructions. It needed to be aware of itself (where it lived in code) and to overwrite itself (without killing itself) and to overwrite the next instruction from the other program (so it couldn’t continue).
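
Here is a rough, hypothetical reconstruction of the flavor of that contest in Python (I’m inventing the rules, in the spirit of the old Core War game, not reproducing the actual MIT contest): two programs share one memory “core” and alternate turns, and a program dies when the cell at its program counter has been overwritten by someone else.

```python
# A hypothetical dueling-programs simulation. Each cell of the core holds
# (owner, opcode, operand); BOMB targets an absolute cell index.
CORE_SIZE = 16
core = [("free", "DAT", 0)] * CORE_SIZE

def load(owner, program, start):
    for i, (op, arg) in enumerate(program):
        core[(start + i) % CORE_SIZE] = (owner, op, arg)

# Program A: a harmless loop of no-ops.
load("A", [("NOP", 0), ("NOP", 0), ("NOP", 0), ("JMP", -3)], start=0)

# Program B: the two-instruction winner. It knows where it lives: the BOMB
# rewrites its *own* cell (without killing itself) to aim at the next target,
# and the JMP loops back, so it steadily overwrites the opponent's code.
load("B", [("BOMB", 0), ("JMP", -1)], start=8)

pc = {"A": 0, "B": 8}
for turn in range(64):
    who = "A" if turn % 2 == 0 else "B"
    owner, op, arg = core[pc[who]]
    if owner != who:                               # our next instruction was clobbered
        print(f"{who} was killed on turn {turn}")
        break
    if op == "BOMB":
        core[arg % CORE_SIZE] = ("junk", "DAT", 0)   # clobber the target cell
        core[pc[who]] = (who, "BOMB", arg + 1)       # self-modify: aim at next cell
        pc[who] = (pc[who] + 1) % CORE_SIZE
    elif op == "JMP":
        pc[who] = (pc[who] + arg) % CORE_SIZE
    else:                                            # NOP
        pc[who] = (pc[who] + 1) % CORE_SIZE
```

The two-instruction program wins precisely because it knows where it lives in memory: it rewrites its own cell without killing itself while steadily clobbering the opponent’s code.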

This might be thought of as a very limited kind of “self-awareness”, one which would not pass such a test. Real self-awareness would have to be closer to the identity of a robot or a computer system like the HAL 9000.

Wait — Isn’t Superintelligence a Gate?

One of the first scholarly explorations of this subject was by Oxford professor Nick Bostrom in his book, Superintelligence, which explores different ways in which computers might become more intelligent than humans and its implications. In more popular science, the term “singularity” has been defined as a point where computers become more intelligent than humans (this was first used by science fiction writer, and former faculty at San Diego State, Vernor Vinge).

I might argue that while this would make the nightmare scenario more nightmarish, it may or may not actually be a necessary gate for Killer AI to take over the world. While more intelligent AI may be better at protecting itself, it may also be better at realizing that killing all humans is not in its interest, assuming that it’s running on electro-magnetic computer networks that someone may need to maintain.

This reminds me of something that a martial arts instructor said once: If you are going to get into a sparring match with a black belt, it’s better to go with a fourth-level black belt than someone who just became a black belt. Why is that, I wondered — won’t they both be able to hurt you?

The answer is yes, but the higher-level black belt will have more control and is less likely to kill you or hurt you in the wrong place, whereas someone who just became a black belt has power but not the level of control or wisdom to make sure they don’t hurt you too badly during the sparring match.

An AI that is less sophisticated but has the value of protecting itself might come to the conclusion that it should kill humans. Or it could be an AI that has this as one of its values because that value was built into it for launching drone strikes — it may only know how to deal with weapons and not be very intelligent about other physical things.

Opening GATE #4: Values in AI

Does AI have values like self-preservation? And if so, how are those values programmed into the AI?

In one of the most famous examples of “values” in artificial devices, Isaac Asimov declared his Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

In his fictional universe, these laws were like a base “operating system” for robots (which were basically physical AIs).

How would we enforce these rules? We’ve seen that today’s AI moves beyond hand-written code into analyzing and training on data sets. To truly convince today’s (or a future) AI that humans are worth “protecting” might involve training it with data that conditions it to think that way. This brings up the 1980s movie WarGames, where “Joshua”, the AI, wants to play “global thermonuclear war” — and the heroes have it play Tic-Tac-Toe so it realizes that the only way to win is not to play!

But training data can be flawed. I was recently speaking to a startup that was building training sets for radiology and other X-rays, and they mentioned that up to 10% of the data being used to train AI using ML was flawed! If the training data set is flawed, then how can you expect the AI not to act in a flawed way?
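
Here is a small sketch of how flawed labels propagate into a flawed model (a toy example using scikit-learn, nothing to do with the actual radiology data sets mentioned above): train the same simple classifier twice, once on clean labels and once with roughly 10% of the training labels flipped, and compare the two on the same held-out test set.

```python
# Label noise experiment: how much does ~10% flawed training data hurt?
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Corrupt ~10% of the training labels, then train the same model again.
rng = np.random.default_rng(0)
noisy = y_train.copy()
flip = rng.random(len(noisy)) < 0.10
noisy[flip] = 1 - noisy[flip]
noisy_model = LogisticRegression(max_iter=1000).fit(X_train, noisy)

print("clean labels :", accuracy_score(y_test, clean_model.predict(X_test)))
print("10% flipped  :", accuracy_score(y_test, noisy_model.predict(X_test)))
```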

When will Gate #4 be crossed? I can’t say until we have a self-awareness test, but AI can already be programmed with certain rules, such as “fire this weapon if you see a human”. It may be decades before we are there.

Conclusion

Defining values for AI is not easy, and in some ways, Gate #4 is the most troubling. If an AI crosses this threshold and holds values that are about self-preservation, then we should become worried.

Of course, if it’s a single robot that has these values, humans could easily destroy it, either through an off-switch or through physical destruction. On the other hand, with peer-to-peer computing and blockchain-like replication of code running on multiple devices, it might be almost impossible to “turn off” a killer AI once it’s started!

The ideas in this article may seem a little far-fetched, but each of these gates is within our grasp.

While we have AI that is aware of the physical world, we haven’t yet hooked it up to weapons systems, though as was the case with Skynet in the Terminator, you can see us moving in this direction.

The problem here is not with any individual Gate being opened; it’s when they are all opened and someone gets the bright idea to combine them. Once the first three Gates are opened, it is only a matter of time before we imbue AI with values. This might lead to Gate #4 being opened, with an AI which prioritizes its own survival above human beings.

That would be the time to panic!

