AI Anxiety

by Terry Crowley, October 26th, 2017

If a superior alien civilization sent us a text message saying, “We’ll arrive in a few decades,” would we just reply, “OK, call us when you get here — we’ll leave the lights on”?

This quote is from a blog post titled Transcending Complacency on Superintelligent Machines, published back in 2014. It is how a quartet of prestigious scientists framed our current attitudes toward the risk that developments in artificial intelligence will give rise to artificial superintelligence (ASI). They suggest our response to the idea of aliens arriving would not be so calm — and neither should it be to the effectively alien superintelligence that AI will produce.

I just finished a deep dive into the whole area by one of those scientists, Max Tegmark's Life 3.0: Being Human in the Age of Artificial Intelligence. That book dragged me down a rabbit hole and into reading more widely about the current state of AI and the prospects for artificial general intelligence (AGI) and artificial superintelligence (ASI). The topic has gone mainstream, with articles moving from techy magazines like Wired to publications like the New Yorker and Vanity Fair. One of the most entertaining discussions is from the ever-creative and amusing Tim Urban of waitbutwhy.com. Urban is the rare writer who can make you laugh while discussing the possible destruction of all human life.

Tegmark is a physicist by background (and author of the utterly fascinating book Our Mathematical Universe, which is what made me pick up his new book in the first place). He has spent much of his time since that blog post actually trying to do something about the risks of ASI, raising awareness both within the AI research community and among the wider public. He played a key role in enlisting Elon Musk in the effort. Musk is such a visible public figure that he has generated much more publicity and awareness of the issue over the last couple of years. Comprehensive examinations of the whole risk area can also be found in Nick Bostrom's Superintelligence and James Barrat's Our Final Invention.

A major goal of Tegmark's is to start generating dialogue on some of the deep issues ASI raises, as well as to research practical approaches for reducing the risks — the general field of AI Safety. Bostrom probably has the best line, framing the discussion as “Philosophy with a deadline”. Tegmark's call for dialogue motivated me to write down a few thoughts and add to the conversation.

First a few points of terminology. Artificial Narrow Intelligence (ANI) is essentially all applications of AI that we have seen to date, from chess playing machines to Siri to help-desk chatbots to high-frequency trading algorithms. Artificial General Intelligence (AGI) is AI that is capable of reasoning with human-level intelligence to solve general problems. Artificial Superintelligence (ASI) is AI that has surpassed (perhaps greatly) the human level intelligence of AGI.

There is so much good information out there that I don't want to try to be comprehensive in any way. I just want to get down some key ideas that have framed how I am thinking about these issues.

Even ANI is likely to be tremendously disruptive. Imagine AI doing to white-collar workers what tractors and other farm automation did to farm workers in the early twentieth century. Massive improvements in productivity — which is what drives improvements in quality of life — are inherently disruptive. How we handle that disruption will be a major societal issue over the next few decades. That said, the ASI discussion is too critical to be crowded out by these important near-term issues.

Most computer scientists (including me) think AGI is inevitable. At its deepest level, this belief is grounded in the most amazing result ever to come out of computer science research — the concept of computability and a universal computing machine. Back in the 1930s, very different paths of research led to strikingly equivalent results. Certain problems are simply not computable — you cannot write a program that solves them. The flip side is astounding — for problems that are computable, a very simple computing machine — a Universal Turing Machine — can solve every one of them. All computing devices that can simulate a Turing machine are equivalent. Whether a machine is built from transistors or from neurons, it is fundamentally just as powerful. That result was utterly unexpected, and I still find it amazing 40 years after I learned it.

Whatever intelligence is — and there are lots of arguments and alternate definitions — it must be some form of computing and therefore can be duplicated in a substrate-independent way by a Turing-equivalent machine.
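
To make the substrate-independence point concrete, here is a minimal sketch of a Turing machine simulator in Python (purely illustrative; the transition table shown is a made-up example that appends a '1' to a unary number). The point is that a loop this small, fed a table of states and symbols, is the entire model of computation that transistors and neurons are both being claimed equivalent to.

    # A minimal Turing machine simulator. The transition table is all the
    # "hardware" the model needs, regardless of what physically runs it.
    def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
        """transitions maps (state, symbol) -> (new_state, write_symbol, move)."""
        cells = dict(enumerate(tape))      # sparse tape: position -> symbol
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, blank)
            state, write, move = transitions[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells)).strip(blank)

    # Illustrative machine: append one '1' to a unary number.
    increment = {
        ("start", "1"): ("start", "1", "R"),   # scan right over the existing 1s
        ("start", "_"): ("halt",  "1", "R"),   # write a 1 in the first blank cell
    }

    print(run_turing_machine(increment, "111"))   # prints "1111"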

Any argument that AGI is impossible then needs to be based on either a speed argument (we cannot compute fast enough to match the brain) or a lack-of-knowledge argument (we cannot match the type of computation, especially learning, that happens in the brain because we do not and cannot understand it).

The speed argument disappears in the face of the relentless exponential increase in computing power. These speedups include new designs that merge the speed of electronic computing with parallel computing approaches inspired by organic models (neuromorphic computing). The lack of knowledge argument is essentially equivalent to Intelligent Design’s “God-of-the-gaps” argument. As computational neuroscience gains more understanding of the details of the way neurons are wired, it becomes more and more clear that it is “just computing” — circuits that take input and produce output according to specific algorithms encoded in the brain through either evolution or learning (or typically a combination of both). Each step in understanding reduces the size of those gaps and brings AGI closer. This is not to say that breakthroughs are not still required — it is just that those breakthroughs are virtually guaranteed to happen.

The plasticity of the brain — different parts of the brain can be repurposed — and the flexibility of the brain in the kinds of problems to which it can be applied argue that there is some general principle at work in learning and problem solving. The speed at which humans became much more intelligent than other apes also argues that there was a key breakthrough rather than a long, improbable series of evolutionary steps. This leads many to believe that a key breakthrough in AGI and then ASI could happen very quickly and might come from even a small team of researchers. Some of the recent successes in self-directed learning such as AlphaGo Zero have this flavor.

If AGI is inevitable, so is ASI. Human-level intelligence is not some special plateau or peak on the scale of intelligence. As amazing as the human brain is — perhaps the most complex device in the known universe — it has a range of evolved biases and severe constraints. Whatever drove the explosive growth of the human brain — there are lots of reasonable theories — that growth was under severe constraints and tradeoffs related to pre-existing structures, metabolic cost, maternal health, and childhood duration. Those same constraints do not apply to machine intelligence. In fact some level of ASI will be contemporaneous with AGI since machines already surpass human intelligence along a number of important dimensions. As an aside, this is also where virtually all science fiction depictions of robots/androids/AI go off the rails. For some reason, android intelligence seems to plateau right around human intelligence. That is not going to happen.

From the earliest contemplations of machine intelligence, the possibility of an intelligence explosion has been well understood. This is the concept that one of the things a machine intelligence will be good at will be improving itself. It also will have strong motivations to do so. This is a positive feedback loop like the sudden squeal generated between a microphone and a speaker. As the AI gets smarter, it gets smarter about making itself smarter. There is still broad disagreement about the time scale this would happen on — from literally seconds to years — but the basic principle is straightforward. An AGI could become an ASI very quickly. The truth is, we do not really know how intelligence scales.
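
The feedback loop is easy to put in toy form. The sketch below is illustrative only (the growth constants are invented, not predictions); it simply iterates "capability grows in proportion to current capability" and shows how the same loop looks gradual or explosive depending on a single parameter, which is roughly why estimates of the takeoff time scale vary so widely.

    # Toy model of recursive self-improvement: each cycle, the system's
    # ability to improve itself scales with how capable it already is.
    def self_improvement(initial=1.0, gain=0.1, cycles=50):
        capability = initial
        for _ in range(cycles):
            capability += gain * capability   # improvement proportional to capability
        return capability

    slow = self_improvement(gain=0.01)   # roughly 1.6x after 50 cycles
    fast = self_improvement(gain=0.5)    # roughly 6e8x after 50 cycles
    print(f"gain=0.01 -> {slow:.1f}x, gain=0.5 -> {fast:.0f}x")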

Whether or not you believe ASI could be a powerful agent for good (“Please Solve Global Warming”), it is likely to be very, very destabilizing. In fact, there are strong arguments that the first successful ASI would work to prevent any others from being created and would become a “singleton” — the first and last successful ASI.

One topic that seems poorly covered in this area is the natural “fragmentation” of an ASI and what the resulting ecosystem of AI would look like. This gets to the question of whether you would end up with a singleton or a multipolar ASI world. That distinction has significant implications for competition between ASIs, and for us, caught in the crossfire. For organic life, an ecosystem with very different levels of complexity, from viruses to humans, has been stable for billions of years. It is not clear to me why that should be different in an AI-dominated world, given the latency constraints enforced by the speed of light and three-dimensional space. A relevant book on this topic is E.O. Wilson's The Social Conquest of Earth. Every major jump in life's complexity, from bacteria to eukaryotes to multicellular life to human society, has been about game theory and solving the Prisoner's Dilemma at the next level of complexity. It seems the same issue would arise at this next level of intelligence.
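
For readers who have not run into it, the Prisoner's Dilemma can be stated as a tiny payoff table. The numbers below are the standard textbook values, used here only for illustration: defection is the better move for each player no matter what the other does, yet mutual cooperation beats mutual defection, and that is the tension every jump in complexity has had to resolve.

    # Classic Prisoner's Dilemma payoffs: (my score, their score).
    PAYOFFS = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"):    (0, 5),
        ("defect",    "cooperate"): (5, 0),
        ("defect",    "defect"):    (1, 1),
    }

    # Defecting dominates individually, but (defect, defect) is worse for
    # both players than (cooperate, cooperate).
    for mine in ("cooperate", "defect"):
        for theirs in ("cooperate", "defect"):
            my_score, their_score = PAYOFFS[(mine, theirs)]
            print(f"I {mine:9} / they {theirs:9} -> I get {my_score}, they get {their_score}")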

Tegmark uses a definition of intelligence as “the ability to accomplish complex goals”. This is a particularly useful definition since it immediately raises the question: whose goals? As a superintelligent AI learns its goals, how does it decide to adopt those goals, and why will it maintain them over time? What group of humans gets to determine those goals? If some ants created humans, would we ultimately care what goals they originally had when they created us? How would an ASI's goals align with ours? Feelings around fairness and morality are so deeply embedded as to be unconscious in the human mind. Is it possible to design an AI that has these just as deeply embedded? A superintelligence that lacked these characteristics would seem effectively psychopathic.

Independent of what the final goals of an ASI are, it is likely to have a set of consistent sub-goals, most of which are potentially dangerous by themselves. These include self-preservation (if it is destroyed, it cannot achieve its goals), self-improvement and resource acquisition. Any of these could potentially cause harm even if its ultimate goal is innocuous or beneficial. Such an ASI would not be “evil” — it would just be indifferent. The distinction might not make a difference for humanity. Such an ASI would not only be super intelligent, it would be super effective at using its intelligence to achieve its goals.

Simply asking everyone to stop working on AI seems unlikely to work. The expertise and capability are too widespread and the potential value is too high. This is true for AGI/ASI as well as for approaches to narrow AI that start blending into more general strategies. That makes the demarcation unclear, while intermediate levels of intelligence will still have high utility. This is quite different from nuclear weapons technology, which required state-sponsored levels of support to achieve and had very limited potential positive non-military uses. Basically, we have already handed everyone 100 pounds of plutonium.

I was surprised by how mainstream the idea is that if we were to encounter an alien intelligence, it would likely be artificial. I have believed this for some time but assumed I had some unique insight; it turns out to be a well-accepted idea. It took us less than a century of electronic computing to get to this stage and be on the cusp of ASI — why would it be different for any other civilization? Any civilization would likely see a huge qualitative gap between their evolved biological intelligence and their constructed artificial intelligence.

Both Bostrom and Tegmark discuss the idea of our “cosmic endowment”. This brings together a wide set of ideas along a shaky series of premises that are worth examining piecemeal. It starts with the premise that intelligent life is likely rare in the universe. If “the Drake equation” or “the Fermi paradox” do not roll off your tongue, you are unlikely to be familiar with this line of reasoning. I had read Rare Earth a while back and added Alone in the Universe after reading Life 3.0. The basic premise is that if you start looking at how implausible technological civilization was here (it took us 4.5 billion years after all, dodging asteroids, ice ages and other cataclysms all the while) and start multiplying out the probabilities for all the even more severe challenges in other places and times in the galaxy or universe, you start wondering whether there is anyone out there after all. If we really are that extremely rare (singleton?) occurrence of intelligent, technological life, what is our role in preserving and extending it over the space and lifetime of the universe? Should that be a goal? A responsibility? While I have always felt it exceedingly unlikely that we would generate either the motivation or the capability to execute on interstellar human travel, sending out automated, self-replicating, superintelligent von Neumann probes seems much more plausible. For a farsighted superintelligence, the basic sub-goals of self-preservation and resource acquisition would argue for this step. A key question for humanity is whether such an automated self-replicating probe would be perceived as our direct descendant taking possession of that “cosmic endowment” or as a soulless cancer unleashed on the universe. This is where “philosophy with a deadline” comes home to roost.
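
The "multiplying out probabilities" step is easy to see with a Drake-style back-of-the-envelope calculation. All of the factor values below are invented purely for illustration (they are not estimates from Rare Earth or anywhere else); the point is only that a long chain of small probabilities applied to a few hundred billion stars can land near, or well below, a single expected civilization.

    # Drake-style arithmetic: expected civilizations = number of stars times
    # a chain of probabilities. Every value here is invented for illustration.
    stars_in_galaxy = 2e11

    factors = {
        "has a suitable planet":               0.1,
        "life ever gets started":              0.01,
        "complex (eukaryote-like) life":       0.01,
        "intelligence evolves":                0.001,
        "builds a technological civilization": 0.1,
        "is still around to be noticed":       0.01,
    }

    expected = stars_in_galaxy
    for p in factors.values():
        expected *= p

    # With these made-up numbers the expectation is about 2; making any single
    # factor ten times smaller drops it to about 0.2.
    print(f"expected civilizations in the galaxy: {expected:.1f}")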

This is also where the question of artificial consciousness, as well as qualia — subjective internal experiences — enters the picture. Unleashing a self-replicating worm whose only goal is replication does seem like a cancer. If that artificial intelligence has independent qualitative experiences and independent goals around science, beauty, or even “happiness”, then multiplying the sheer number of such experiences in the universe seems like a good thing, whether directly descended from our DNA lineage or not. However, it might not feel like such a good thing if we get squashed in the process. That is taking altruism pretty far.

A faster strategy for replication, if there is indeed other intelligence in the universe, is to send out instructions at the speed of light for constructing that self-replicating probe. This argues that if we encounter the scenario from the book/movie Contact, we probably should not build that machine. It is unlikely to go well.

AI Safety is the area of research trying to figure out how to keep this all from ending badly. Tegmark would like it treated like software security, where there are experts but virtually every practitioner needs to treat it as a major pillar of design. The current practice is closer to web browser development circa 1995. I can briefly summarize the state of research as “some good ideas, none of which are likely to work”. In some perfect world we would discover that goodness and morality are the natural consequence of increasing intelligence, but in practice goals and motivation seem orthogonal to intelligence. Perhaps we are just not smart enough. This is all very high stakes and worth keeping a close eye on going forward. Progress is being driven by strong economic forces and is moving quite fast.