Update: For a deeper analysis, see First Principles: The Inevitable Rise of AI.
The threat of AI to the human species has entered mainstream discussion due to recent advances in machine learning. Elon Musk, Mark Zuckerberg, Bill Gates, and Sundar Pichai are just a few of the tech leaders who have added their viewpoints to the debate.
So let’s set the record straight.
Of course AI is a threat to the human species.
This is not opinion; this is fact. The real question is:
How much of a threat will it be with respect to future time?
If you were to walk through the plains of India, there would be some probability of your encountering, and being bitten by, a King Cobra. Many variables influence this risk, such as time of day, the path you are walking on, your gait, your eyesight, the season, and so on; given enough data, a probability of being bitten could be assigned based on these measurable variables.
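To make that concrete, here is a minimal sketch of how such a probability might be assigned. Everything in it is an assumption made for illustration: the features, the weights, and the logistic form are invented, not a model fitted to real encounter data.

```python
import math

def bite_probability(hour_of_day, on_marked_path, season_is_monsoon):
    # Hypothetical logistic model; the weights below are invented,
    # not estimated from any real dataset.
    is_night = hour_of_day >= 18 or hour_of_day <= 6
    score = (
        -6.0                                     # baseline: bites are rare
        + 0.8 * (1 if is_night else 0)           # cobras more active at night
        - 1.5 * (1 if on_marked_path else 0)     # marked paths are safer
        + 1.0 * (1 if season_is_monsoon else 0)  # more encounters in monsoon
    )
    return 1 / (1 + math.exp(-score))            # squash score into (0, 1)

print(f"{bite_probability(22, False, True):.4f}")   # night, off-path, monsoon
print(f"{bite_probability(12, True, False):.6f}")   # midday, on-path, dry season
```

The numbers it prints mean nothing in themselves; the point is only that "threat" can, in principle, be expressed as a measurable probability.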
Now imagine you live in the suburbs of Kansas City and are walking in your backyard. The likelihood of being bitten by a King Cobra there is much smaller than if you were walking in one of its native habitats. In fact, it probably has never happened and probably never will. But there is always the theoretical possibility that a zoo transport truck could crash down your block, the animals could escape, and a cobra could make it to your backyard.
The point is that a threat still exists even if it is infinitesimally small. Therefore future AI is definitely a threat. The real question is how much of a threat?
But in order for threat to have meaning in the context of the future, we must discuss time.
Our species is roughly 50,000 years old. And we are the dominant species on Earth. Most of us don’t worry too much about an ape army organizing and initiating a multi-pronged strategic attack, and only microscopic living creatures could cause our species significant damage.
It follows that if AI ever becomes a threat to the human species at some point in the future, it will also threaten all other life on Earth: anything capable of overpowering the dominant species could overpower every species beneath it.
Therefore AI is a threat to biological life, which is estimated to be over 4 billion years old.
It seems that whenever we start discussing the future of AI and whether it poses an existential threat to humanity, we tend to speak in terms of tens of years, or at most a generation or two. But 100 years is 0.2% of the time span of our species, and 0.0000025% of the time span of life on Earth.
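For readers who want to check the arithmetic, those percentages fall straight out of the numbers already used in this piece:

```python
species_age = 50_000        # years: age of our species, as used above
life_age = 4_000_000_000    # years: rough age of life on Earth
horizon = 100               # years: the usual discussion window

print(f"{horizon / species_age:.2%}")  # 0.20%
print(f"{horizon / life_age:.7%}")     # 0.0000025%
```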
We’re not going to review history here, but we are moving farther and farther along what currently looks like an exponential curve of societal and technological growth. As long as this trend continues, the rate of human innovation will keep rising, with each advance accelerating the next. And the farther out we look, the more uncertain the future becomes.
The future one minute from now is reasonably certain. The future 10 days from now, or even 3 months from now, is also reasonably certain (allowing for the always-present risk of major international conflict). The future 20 years from now is much cloudier; 100 years from now, even more so.
If we combine the increasing rate of technological innovation with the increasing uncertainty as we move farther into the future, we can infer that the theoretical potential threat of AI also increases. Just as the plains of India are a more likely habitat for a King Cobra than a backyard in Kansas City, the distant future is a more likely habitat for powerful AI than the present or near future.
It’s a fun game to try to predict whether AI will be a threat 10 to 30 years from now. But where it really gets interesting is when we zoom out a little. Could AI be a threat to humanity in 200, or 2,000, or 10,000 years? Do you start to feel a little less confident while zoomed out? These are still small time frames relative to the history of our species, and almost negligible time frames relative to the full history of life on Earth.
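One way to see why zooming out matters: even a tiny, constant per-year risk compounds over long horizons. The sketch below assumes such a constant annual probability exists at all, which is itself a major simplification, and the value chosen is arbitrary; the point is how much of the work the time frame does.

```python
p = 1e-5  # assumed constant annual probability, purely illustrative

for years in (30, 200, 2_000, 10_000):
    # Probability of at least one occurrence within the horizon.
    at_least_once = 1 - (1 - p) ** years
    print(f"{years:>6} years: {at_least_once:.2%}")
```

With these made-up numbers, a risk that looks negligible over 30 years (0.03%) grows to nearly 10% over 10,000 years.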
You may be thinking, “Well sure, but problems hundreds or thousands of years from now are problems for the future. There is nothing we can do about those time frames now; we will all be long dead. All we can ever actually affect is the present, and a zoomed-out perspective is just a pointless thought experiment.”
Fair enough. But when we ponder big topics such as the fate of humanity and whether AI will be more efficient than biological life, we are speaking about ideas that should be discussed in time frames relative to their existence, not the lifespans of a few generations of Homo sapiens.
And when visionary leaders ponder hard questions and the response is, “Do you even understand AI? We are really far away from this doomsday you are speaking about,” my counter-question is, “What is far?”
Maybe I should just accept that when most of us speak about the future and fate of our species, we’re being dramatic, and what we are really interested in is the lives of our children and grandchildren.
50 years ago, computers were the size of entire rooms, and 15 years ago, deep neural nets were used only by maverick researchers. It took over 4 billion years for biology to get us to the present. The growth of machine complexity in just 50 years has been fast enough that discussing machines as a future unknown, and therefore a theoretical threat, appears both rational and wise for anyone truly interested in the future of our species.