I am an author, futurist, systems architect, public speaker and pro blogger.
As I said in my article "What Will Bitcoin Look Like in Twenty Years":
Prediction is a tricky business.
You have to step outside of your own limitations, your own beliefs, your own flawed and fragmented angle on the world and see it from a thousand different perspectives. You have to see giant abstract patterns and filter through human nature, politics, technology, social dynamics, trends, statistics and probability.
(We have to transcend the relativity of our own perspectives to see the future)
It’s so mind-numbingly complex that our tiny little simian brains stand very little chance of getting it right. Even predicting the future five or ten years out is amazingly complicated.
So what am I going to do?
I’m going to try to predict 50 and 500 years from now!
Yes, I realize this is utterly insane.
It’s like climbing Mount Everest, with no shoes, no jacket, no Sherpa, and no oxygen after having barely climbed a small hill!
Of course, I’m going to do it anyway.
When someone asked George Mallory why he climbed Mt Everest, he said “because it was there.” Like many famous quotes, he probably never really said it but who cares? The quote was so good we had to invent it anyway!
So let’s go for it!
Let's dive in and take a look at how AI will change society in the next few years, by the time you're old and grey, and long after you've turned to dust.
AI is already radically changing the world. It’s not coming. It’s here.
When you talk to your phone and it understands you want directions to Thai food, that’s AI. AI is already driving cars, beating people at ancient board games like Go, pummeling people in complex tournament video games like DOTA 2, and detecting diabetes.
Machines hunt down terrorists' money, and the Pentagon just dropped its first comprehensive AI strategy while it builds a Joint Artificial Intelligence Center to automate war.
Taylor Swift used facial recognition to track predators at concerts. She's not the only one. Facial recognition is rolling out to airports all over the world and it's coming to a street corner near you.
Hate it or love it, AI is everywhere.
And it’s just getting started.
Right now AI is a mustard seed.
But from that seed will grow a wild forest that ripples through every aspect of life from top to bottom.
In the short term the promise and peril of AI are legion.
AI will deliver some of our brightest fantasies and our darkest nightmares.
(AI is a Universal Turing Machine squared)
Because AI is a universal technology. It’s flexible enough to do whatever we want it to do.
And that means it will reflect the good and evil of its creators:
For all the worry about Terminators and machines taking over, AI is not even close to sentient. It doesn’t have its own desires. It’s not taking over the world any time soon, although we’ll explore that possibility when we gaze 500 years into the future.
What we have now is narrow AI. It’s limited to single tasks and it absolutely does whatever we tell it to do. It’s under human control. If it does bad things we have only ourselves to blame.
Where we’re going in the next five to fifteen years is a future dominated by that narrow artificial intelligence.
AI and how it’s used will reflect and magnify our nature a thousandfold, both the dark and the light.
It will live in your cameras, your phone, your computer, your glasses and places public and private.
It will power marketing, manufacturing, materials science and both preventative and palliative health care. It will level up surveillance and weapons technology. It will change the way we work, how we work with each other and how the world works.
It will become our friend, our co-worker, and our enemy.
As Francois Chollet said, “AI will become our interface to the world.”
In short, there will be nothing outside of the reach of AI.
Welcome to the dawn of the age of intelligence.
Come with me now and I'll show you how it all begins and how it leads us into an intelligence explosion that will make the great shifts of humanity's past, hunter-gatherer to farming, farming to the industrial revolution, and the industrial revolution to the information age, look trivial by comparison.
The most hopeful area of AI for good lies in the realm of health care.
Elizabeth Holmes might have fooled the world with her grand fraud, but she was right that we are at the beginning of a radical shift from reactive health care to preventative health care.
(Even if she set it back a few years by scaring off investors.)
(Detecting skin cancer with visual pattern recognition)
We'll point our smartphones at a worrying spot on our arm and the AI that lives in our phone will tell us to call the doctor. When we send the report it generated to the doctor's office, the triage nurse will know to bump us to the head of the schedule if it's skin cancer or a festering wound, rather than treating every patient as equal, from hysterical hypochondriacs to the old lady who just likes talking to doctors.
A million-dollar prize on Kaggle in 2017 already produced breakthroughs that are having real-world effects on detecting lung cancer, a disease with notorious false positive and false negative rates.
False positives and negatives mean patients get the wrong care, get care too late or, worse, don't get care at all. Big improvements mean cheaper health care and people living a lot longer because they get the care they need when they need it.
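To make those terms concrete, here's a minimal sketch of how false positives and false negatives translate into the metrics a screening model gets judged on. The counts are illustrative numbers I made up, not real clinical data:

```python
def screening_metrics(tp, fp, fn, tn):
    """Compute standard screening metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)            # of flagged scans, how many were cancer
    recall = tp / (tp + fn)               # of real cancers, how many were caught
    false_positive_rate = fp / (fp + tn)  # healthy patients sent for needless follow-up
    false_negative_rate = fn / (fn + tp)  # cancers missed entirely
    return precision, recall, false_positive_rate, false_negative_rate

# Illustrative numbers for 1,000 scans -- not real clinical data.
precision, recall, fpr, fnr = screening_metrics(tp=45, fp=120, fn=5, tn=830)
print(f"precision={precision:.2f} recall={recall:.2f} "
      f"fpr={fpr:.3f} fnr={fnr:.2f}")
```

Notice the trade-off built into those four numbers: a screener tuned to never miss a cancer (low false negatives) tends to flag more healthy people (high false positives), and that's exactly the dial the Kaggle lung cancer work was trying to improve on both ends at once.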
The biggest hurdle to AI in health care is the overly conservative and overly regulated nature of the system in almost every country in the world, whether that's the cold-blooded big-money system of the US or the sweeping socialized systems of Europe.
It’s hard to get new devices approved. The barrier to entry is massive. Small boards of people we don’t know act as a choke point, both protecting people from hucksters and slowing down innovation with their double edged sword.
Expect those walls to start to come down as AI shows increasing promise.
Laws will change to make it easier for AI based health devices to get into hospitals and into our phones and augmented reality glasses.
As the walls come down it will open up the possibilities for preventative care at a more personal level. Your watch and your glasses will know that your frequent napping during the day isn’t just overwork, it’s sleep apnea.
That will streamline the convoluted process we have now where you make an appointment to get a sleep study, then go to another appointment to get trained for the sleep study, then take the machine home and do the sleep study, then take it back to the office to get your results and then see a doctor in yet another appointment to prescribe the actual CPAP machine that alleviates the problem.
Archaic and convoluted processes like this will start to fall to the power of AI.
And all around the hospital devices will get smarter, from blood pressure machines to cameras. Cameras will detect if old folks stop breathing, have a heart attack, or take a nasty spill.
All of our devices will start to work as preventative care devices.
Your watch will increasingly become your heart and health monitor. In ten to fifteen years it may even be able to predict a heart attack coming before you go face down in your eggs and bacon.
“ALERT: Please go to an emergency room immediately as you’re in danger of a heart attack.”
That will mean you get to the hospital in time to live a lot longer.
Despite IBM Watson's recent failure in the health care market, AI will work closely with doctors to study symptoms and diagnose disease. Even the best doctors can easily confuse the symptoms of something sinister with a bad cold or the flu. AI will draw from a broader baseline of knowledge and it will spot rare diseases quicker than any human mind. It will make doctors' intuition all the more refined because it will take the load of deep pattern matching off their plate.
And of course, humans can’t fix spiraling and out of control healthcare costs.
Maybe the machines can?
The next best case for AI in the coming years is as a co-worker, not as a replacement for us. Whether it's blue-collar tasks or white-collar tasks, AI will change the way everyone does business.
AIs that work side by side with humans to make us better, smarter and faster already have a nickname: centaurs.
They're named after Garry Kasparov's early experiments with chess tournaments, where human-and-AI teams bested pure AIs and unaugmented humans. The name comes from the mythical beast of Greek legend that's half horse and half man, symbolizing how man and machine can work together.
Centaurs are already hard at work and creeping into my devices.
My Gmail suggests ways to complete sentences and it works surprisingly smoothly. It’s not always right but it often saves me time during the day by doing the grunt work of business email, finishing sentences like “It was nice to meet you the other day” or “let’s find a time that works for everyone.”
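Under the hood, suggestions like that come from language models trained on mountains of email. Here's a toy sketch of the core idea, a bigram model in plain Python. Real systems like Gmail's use neural networks trained on vastly more data, but the predict-the-next-word loop is the same:

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count which word follows which across a training corpus."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            model[current][nxt] += 1
    return model

def suggest(model, word):
    """Suggest the word most frequently seen after `word`, if any."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Tiny stand-in for the mountains of email a real model trains on.
emails = [
    "it was nice to meet you",
    "nice to meet you the other day",
    "let us find a time that works for everyone",
]
model = train_bigrams(emails)
print(suggest(model, "nice"))  # suggests "to"
```

Because boilerplate business email is so repetitive, even a crude frequency model like this gets surprisingly far, which is exactly why "let's find a time that works for everyone" is such easy grunt work to hand off.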
We tend to think of AI as C-3PO, a mimic of human intelligence, but that's not what we have now. We have specialized intelligence, not generalized intelligence.
In other words it can’t do all the work for us.
(Again we’ll explore if that changes in the article looking 500 years out.)
Because it’s not generalized, AI will get deployed alongside humans to help them do their work much better.
Take a call center. Everyone's had a terrible experience calling customer service. It's often a low-paying job with a high turnover rate. If you're lucky enough to find someone who can think clearly and actually solve your problem on the other end of the line, it's pretty much guaranteed that person will get promoted to manager or move on to another job more suited to their skills before you ever reach them again.
But what if we could model the best and smartest people in a call center and turn that wisdom into a digital dashboard?
Now we have an average customer service rep working closely with an AI that suggests stronger solutions to a problem, taking some of the critical thinking off their plate. That raises the intelligence of the entire team. They won’t reach the sublime level of that rare customer service rep who dances through red tape and convoluted back office systems with ease, but it will make the average customer rep a lot better.
In factories, we'll see more and more AIs in charge of safety, monitoring equipment and predicting breakdowns long before engineers have finished arguing about who's wrong. That will mean fewer accidents and better productivity.
We’ll also see centaur systems helping us with city infrastructures. Drones and satellites will monitor crops for the spread of disease and soar over city streets hunting for pot holes and filling them in. That knowledge will better inform public policy. Right now, which roads get fixed has absolutely nothing to do with where the money is needed most. It’s usually the result of politicians horse trading or outright guessing.
Armed with knowledge, the general public can demand better roads and bridges because they’ll actually know which ones are broken and engineers will know what roads to schedule for repaving next.
Centaurs will be everywhere in the next five to fifteen years, spreading slowly and then quickly.
They’ll live in our phones and in our ears. They’re already correcting your crappy photography, changing lighting and steadying pictures. Tomorrow they’ll help you learn a language faster, with devices like Lily, the happy little language AI, that talks to you conversationally and understands you didn’t get that tone right when you tried to pick up Mandarin for that trip to see the Great Wall in China.
The earbuds in your favorite wireless headphones will double as a rapid translator when you touch down in Korea and realize no cab drivers speak a freaking word of your native language. Your phone will translate your words back to the cab driver and in no time you’ll be on your way in an extraordinarily overpriced ride from the airport!
Of course, every light has a shadow.
And nothing will make that shadow more clear than the dawn of an AI arms race.
The next five to fifteen years will mark the beginning of a new digital Darwinism.
It’s survival of the fittest at the military and economic level, the nation state level and the business level.
I recently watched the European Union's AI Night in Paris and all the talk was of "collaborating" and "working together" and "privacy" and "human rights."
It’s a noble effort to make AI “human centric.”
Unfortunately, in the short term, it just won’t work.
A human centric approach puts the EU at a massive disadvantage in these early days of AI’s rapid evolution versus countries that don’t give a shit about privacy and human rights.
Today's narrow AI feeds on big data. The more data you can collect the smarter your AI gets, at least until it hits a point of diminishing returns after it's chewed through petabytes of imagery, text and video.
If your citizens don’t have any right to privacy and the government can do whatever it wants, it makes it super easy to build massive data sets to train your AI.
That gives China a tremendous early advantage.
(Alibaba City Brain Dashboard)
While the EU is still debating AI ethics, holding hands and singing songs, China is already rolling out AI at a breathtaking pace. In Hangzhou, the government-backed, Alibaba-built “City Brain” monitors 50,000 cameras, directs traffic, detects car crashes in under 20 seconds and tracks criminals.
They won’t stop there.
They want to build city scale visual search engines, call fire trucks before the people even know their house is burning and they most assuredly have stealth purposes not listed in their marketing material, like tracking dissidents and general malcontents.
That’s the biggest problem with AI.
It’s not only universal, it’s dual use.
If we use it to track old people in hospitals when they go into cardiac arrest or stop breathing, it's only a short hop to health insurers monitoring old folks to penalize them for not working out enough or not taking their medicine.
For governments the ethical implications are even worse. Every country will want to deploy AI at the City Brain scale but they'll need to tread a fine line between privacy and the public good.
That line is fuzzy at best.
The road to Hell is paved with the motto “for the greater good.”
For every use case I can think of that’s a clear public win, I can think of two that hold a more sinister edge.
Yet, there are clear and obvious greater goods:
Hell, if AI can save us from the morning rush hour it might just be worth the surveillance machines, because nothing has wasted more hours of human life than sitting in traffic two hours a day to drive to an office.
Still, a system that spots criminals and "bad guys" can easily spot dissidents, political enemies and anyone who stands up and speaks out against the system. Expect governments to use the now standard bogeyman of terrorism as a justification to sneak these technologies in under our noses without a public debate. They won't even have to sneak them in, because they'll already be there directing the morning commute and calling ambulances, so politicians with darker agendas can easily layer in a second purpose.
And as cities get smarter, it’s not hard to see non-lethal and lethal autonomous arrest technology. A drone will swoop down and capture the bad guy with a net or a glue spray or just kill him with good old fashioned bullets.
The only real question is, who gets to define the bad guy?
There are obvious bad guys like murderers. But it doesn’t take long to start seeing everyone on the other side of the political debate as bad guys too.
Who the bad guys are shifts with who’s in power.
If you have a just government, making just decisions, then the bad guys are people we tend to agree on. But an unjust government can mark anyone who disagrees with them as a criminal.
And that’s the biggest problem. The real bad guys, authoritarians, the vicious and the cruel will already have a dual use technology ready and waiting to serve them when they come to power. AI will live in our public cameras to get help to people faster but its latent power will be there waiting to be activated.
Today's AI serves its masters faithfully. Unlike a soldier, it won't ever question the orders of its commanders. It will just do what it's told with ruthless efficiency, whether it's right or wrong.
We fear AI with consciousness.
Maybe we should fear the exact opposite?
AI without consciousness.
There are less obvious things to watch out for in the next five to fifteen years.
To compete in the early days of data hungry AIs, many democracies will need to skirt the rules.
Like the EU, they’ll say all the right things about working together and making technology human centric but they’ll secretly train their AIs in less open areas of the world, much like the US during the War on Terror outsourcing “special interrogation” to countries where human rights didn’t matter. Or it will be like rich folks promising to pay all their taxes while secretly setting up offshore companies that shield their wealth from the prying eyes of the tax collectors.
Companies and countries will train their systems in the less open areas of the world and then they’ll take those trained systems back to the US and Europe.
They’ll have stuck to the letter of the law but violated the spirit.
This will make China the AI training center of the world. And they'll leapfrog from being the cheap manufacturing center of the planet to a next-gen intelligence economy.
The good news is that all is not lost with hopeful messages like the ones from the EU.
I share their desire to see these technologies used for good. I just don’t share their optimism about human nature. This technology is too tempting and too powerful for the better angels of our nature to prevail across the board.
Even if militaries sign agreements not to make killing machines, they’ll do it anyway with black budgets.
But in the end it won't matter. Technology itself will help mitigate the problem. Right now AI lives on big data. But someday it won't, and that means we won't have to outsource training to places that don't share our values.
What do I mean that AI will evolve beyond big data?
Human intelligence provides the template. We still have one big advantage over the machines.
If you take a kid out in the backyard and throw him a ball, he’ll probably get pretty good at it in a couple of weeks of practice. He doesn’t need to see 10 million images of someone throwing a ball to learn it. If he’s one of the rare athletes that goes pro later, he might watch lots of video and practice a lot more than the other kids, but he’ll never come close to watching 10 million images. And he won’t need to either.
That's because humans have built-in learning algorithms that no data scientist has yet discovered.
We’re a mystery.
Eventually we’ll build AIs that mimic the “universal learning” of people and that means a hopeful chance to make the tech follow stronger values. Unfortunately, right now we don’t know how we do what we do.
We don’t even know where to start but we’ll figure it out, although almost definitely not in five years.
In 50 or 500?
We’ll take a closer look at modeling human intelligence in parts two and three.
But before we get to 50 and 500 years we need to talk about the darkest shadow of AI.
This will surprise some people, but I don't know whether AI weapons are the worst thing to happen to society or the best. I've already explored the darker side of smart weapons so I'll take a look at the other side of the coin here.
Today’s weapons are nasty, brutal things. In the second world war, bombs fell indiscriminately. They rarely struck their target and both sides carpet bombed cities and slaughtered civilians and military targets alike in a whirlwind of fire.
Computerization saw the rise of smart weapons. We took old missiles and added GPS and laser targeting and we got better at hitting the targets we wanted to hit.
Today’s push button remote warfare has remote piloted drones firing smart weapons with deadly precision but they still kill civilians and children with terrifying regularity.
The dirty little secret in warfare is that smart weapons aren’t all that smart.
They still veer off course, strike the wrong people and kill lots of people other than the person we wanted to kill. If a terrorist huddles up in a school full of children the bomb kills all the children.
AI will mark the beginning of truly smart weapons. Pair facial recognition technology with micro-missiles or mini-drones and the weapon can slither in a window and kill only the person we want to target.
I say this is a good thing as if there are any good wars, or as if targeted killings are a good thing, but the truth is I'm just being practical. Wars aren't going away any time soon.
Militaries will build AI weapons. Period. Nothing will stop them, not treaties, not debate, not protests, not holding hands, not wishing it away, not hope and change.
AI weapons are coming.
But upon reflection it is interesting to think that in many ways lethal AI will mean fewer innocent people killed.
That’s a small silver lining in a dark cloud, I know.
But I’m always looking for hope in the midst of a storm and AI weapons are one hell of a perfect storm that is coming whether we like it or not.
The other nasty use of smart machines is super-sized surveillance.
My father is a conservative man and not prone to worrying about the future or technology. But in the early 1990s he was uncharacteristically worried about a massive rollout of surveillance cameras in cities. I told him not to worry about it because they couldn't hire enough people to watch all those cameras.
Twenty years later that counts as one of my worst predictions.
AI does the scaling for us. We don’t need to hire all those people because the camera can do the detection for us and alert a small team of humans.
This is one of the most terrifying and big brotherish uses of the technology that I can imagine in the short term.
It will only get worse.
But there is a real silver lining.
All of these dark uses of the tech will create a blow back.
Tomorrow’s children will get very comfortable with counter-AI measures.
Just as kids in the 1980s learned to make phone calls for free on payphones and watch scrambled channels, tomorrow's kids will know countless ways to game AIs and subvert them.
Want to get your resume looked at faster by a human? Beat the bot by stacking it with key phrases and positive sentiment power words.
Want to beat the facial recognition cameras on the way to the rave or protest? How about designer face paint and clothes designed to confuse the machine's visual powers?
We’ll also see the rise of AI fighting back against AI. Maybe you won’t even need to know the best key phrases to get your resume through because a crowdsourced AI will work across a distributed darknet to hack them into purchasable databases that any kid can buy with a little Bitcoin or Monero?
AI will power malware too. If you think today’s computer viruses are annoying, they’ll only get worse as they learn to adapt in real time to brittle anti-virus countermeasures and heuristics.
Adversarial AI training will become a way of life, not just a theoretical way to crash a self-driving car.
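A toy sketch of the adversarial idea, using a hand-rolled linear classifier as a stand-in for a real vision model. For a linear model, the gradient of the score with respect to the input is just the weight vector, so nudging each feature a tiny epsilon in the direction of the weights' signs, the same trick the fast gradient sign method plays on neural networks, flips the prediction while barely changing the input. The weights and inputs below are made-up numbers:

```python
def predict(weights, bias, x):
    """Linear classifier: returns 1 if the score is positive, else 0."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def adversarial_nudge(weights, x, epsilon):
    """Shift every feature by epsilon in the direction that raises the score.

    For a linear model, d(score)/dx_i = w_i, so the gradient-sign step
    is just epsilon * sign(w_i) -- the core of FGSM-style attacks.
    """
    return [xi + epsilon * sign(w) for w, xi in zip(weights, x)]

weights = [0.9, -0.4, 0.7]
bias = -0.05
x = [0.1, 0.3, -0.1]  # originally classified 0 ("no match")

x_adv = adversarial_nudge(weights, x, epsilon=0.2)
print(predict(weights, bias, x), "->", predict(weights, bias, x_adv))  # 0 -> 1
```

Swap the linear model for a deep network and the input for camera pixels and you have the face-paint trick in miniature: a perturbation too small for a human to care about, aimed precisely where the model is most sensitive.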
No matter how smart these machines get, kids, hackers and militaries will become masterful at fooling machines.
It will be a matter of life and death in some areas and just another way to work the system for a free candy bar in others.
We’ve looked at AI detecting cancer, running entire cities with gigantic virtual brains and making smart weapons smarter.
Realistically, five years is a little too fast. Ten to fifteen years is more likely to see mass deployment of everything I talked about here. But "five, fifty and five hundred years" had a nice ring to it, so don't shoot me!
We tend to overestimate the speed of some things and underestimate it in others. There's a great line in Black Hawk Down, where a soldier asks how long it will take to fix a helicopter.
“Nothing takes five minutes.”
But does it really matter? Five years or fifteen is a drop in the bucket for radical changes that will remake the world.
Change is coming and it’s coming fast and it will only accelerate.
For the first time in history we have a technology that can work on itself in infinite regress. We already have AIs creating AIs and that's only going to get better. Building neural networks is a trial-and-error process at best, so why not let the machines do it?
AI begetting new AI can only lead to one place:
A Cambrian explosion of intelligence.
And that brings us to the next fifty years.
In part two, we’ll zoom forward and see how AI changes the world in ways that will look like magic to every human generation that came before us.
A bit about me: I’m an author, engineer and serial entrepreneur. During the last two decades, I’ve covered a broad range of tech from Linux to virtualization and containers.
You can check out my latest novel, an epic Chinese sci-fi civil war saga where China throws off the chains of communism and becomes the world’s first direct democracy, running a highly advanced, artificially intelligent decentralized app platform with no leaders.