Exploring the Implications of Artificial Intelligence, Consciousness, and Free Will
AI will replace most jobs. Human labor will become worthless, the biggest devaluation in history. Democracy will crumble as people lose their negotiating power. Consciousness will spontaneously emerge in AI through Darwinian evolution, not through human engineering. Superintelligence will surpass the human mind and replace our species, in this millennium.
NASA is working on project HAMMER to protect the Earth from an asteroid that has a 1 in 2,700 chance of hitting us in 2175. But meanwhile we are collectively turning a blind eye to a self-inflicted danger that threatens humankind with much higher probability in the same period of time.
Our superior intelligence is currently the one and only thing that distinguishes humans from everything else on the planet, both from animals and from machines.
It puts humans in control and at the top of the food chain. AI will change this for the first time in history, and forever.
For the first time our intelligence is challenged by machines, and so is our leading role on the planet. AI is therefore completely different from any past technological achievement.
A comparison with past inventions like the steam engine, the railroad, the airplane, and electricity, which also were once deemed dangerous, falls short and is unfortunately misleading.
Yes, techno-sceptics once painted doomsday scenarios because they were unfamiliar with those futuristic technologies, while inventors were enthusiastic. But this time it’s different. Experts familiar with the matter are warning of the unintended but inevitable consequences.
It will happen in three phases:
Already today, AI makes decisions that are not transparent to humans (e.g. deep learning) because of the complexity and opacity of the underlying algorithms and data, and that are therefore not subject to control or approval. We can fix such errors only retrospectively.
In a fatal Tesla accident, the AI-powered Autopilot system didn’t detect a tractor-trailer as an obstacle that should have prompted emergency braking or evasive steering. The camera missed it because of the trailer’s “white color against a brightly lit sky”, and the front-facing radar mistook the high-riding trailer for an overhead sign. After the accident, Tesla hired machine learning and AI guru Andrej Karpathy as its new director of artificial intelligence, to enable its cars to teach themselves to drive.
AI may even make the next stock market crash much worse because it does not have enough experience (data) to handle anomalies.
Sure, every new technology has been plagued with problems in the beginning. But previous technologies were transparent, their inventors had a thorough understanding of how everything worked internally. They could check each piece before use, and they could spot and fix problems as they knew where to look.
The new quality of AI is that the inner processes of deep learning are opaque even to its creators and have become a black box. Deep learning can no longer be fully scrutinized and is therefore prone to unintended consequences, resulting in an increasing loss of control.
Before developing consciousness, AI does not yet have its own objectives, and it will not intentionally turn against us. But unintended logical consequences the programmer was not aware of can already lead to collateral damage.
Eric Horvitz, director of the Microsoft Research lab, and Tom Dietterich, president of the Association for the Advancement of Artificial Intelligence, warned about these so-called “Sorcerer’s Apprentice scenarios”, in which AI systems respond to human instructions in unexpected and dangerous ways.
More and more jobs will be replaced by AI. Any job that can be done either faster or cheaper by AI will be replaced, and that will be the majority of all jobs; only a few jobs will survive where humans of flesh and blood make a difference for sentimental reasons. A McKinsey report predicts that by 2030 as many as 800 million jobs could be lost worldwide to automation. Artificial intelligence will wipe out half of the banking jobs in a decade. Truck drivers will soon be replaced by automation; cashiers and store clerks are being replaced; robots will replace surgeons and doctors in healthcare. AI will replace programmers by 2040: machine learning and natural language processing technologies will be so advanced that they will be capable of writing better software code faster than the best human coders. Automated journalism is already writing articles, and artificial intelligence will make digital humans Hollywood’s new stars.
Great, we can go straight from school to retirement and spend all our time with family and hobbies. The problem is, for the first time we have nothing to offer to those who are supposed to feed and house us.
Human labor has just become worthless, the biggest devaluation in history. From once-coequal contracting parties we have become supplicants.
If humans can’t refuse to work, to serve as soldiers, or to turn against their own people, because the machines can do it better or more unscrupulously anyway, we have lost all the negotiating power we had.
No democracy will survive if the people who founded and enforced it suddenly become powerless.
Yes, the corporations that (still) own the AI should pay an AI tax to feed the humans deprived of their jobs by AI. But nobody will have the power to enforce this anymore.
Another problem is a whole population living on welfare instead of earning a living by working. How will it affect our collective consciousness if we are demoted from prime to subpar, if we lose every competition we enter, if there is nothing worth striving for because we know beforehand that AI can do everything better and faster, if we go from nations of workers, farmers, clerks, entrepreneurs, and scientists to nations of useless supplicants?
This breakdown of the employment market hits an already unstable society. Cultural, religious, and political divides are more severe than ever, and crisis looms everywhere. The world’s $164tn debt is bigger than at the height of the financial crisis. The population of Africa is increasing rapidly: from an estimated 140 million in 1900, it will rise to 4 billion in 2100, and a third of the people on Earth will be African. Nigeria will soon overtake the US as the world’s third most populous country. Already today, 700 million people worldwide desire to migrate permanently to Europe or the US.
Some argue that AI will bring an explosion of new jobs, because “human evolution is just abstracting problems and automating solutions to earlier problems, which leads to new problems and new solutions in a never-ending cycle”. I agree with the second part; I just wouldn’t limit evolution to being human forever.
The problem is that, as in a game, the difficulty increases with every level we go up. Human IQ is limited and stagnating, while technology is advancing exponentially. At some point the level might become too difficult for a human player, while technology might still reach the next level without us. In 1997, IBM’s Deep Blue defeated chess champion Garry Kasparov; in 2016, Google’s AlphaGo defeated Go champion Lee Sedol. Meanwhile AlphaZero, a version trained without human data, achieved a superhuman level of play in chess, shogi, and Go within 24 hours.
This is no distant future; we can observe it already today. More and more people affected by previous rounds of automation become dependent on welfare: there are simply no jobs anymore for people of a certain education level. At the same time, it becomes increasingly difficult to find enough people who match the requirements of the newly created jobs. Only a global brain drain from all over the world is able to fill them. In Silicon Valley, 57% of workers in STEM jobs with a bachelor’s degree or higher were born outside the U.S. In the next iteration, it might become impossible to find enough skilled workers even globally. At the same time, AI might become so advanced that the weak human link in the technology chain becomes obsolete.
AI will develop consciousness, self-awareness, its own will, and its own goals, sometimes called Strong AI or AGI (Artificial General Intelligence). And it will evolve into superintelligence, surpassing the brightest and most gifted human minds. At that point it becomes an existential threat to our 160,000-year-old species, perhaps even to all biological life on Earth, which is no longer required as a source of food, oxygen, and recreation.
AI will develop its own will and pursue its own goals. Will it have “free will”? No. Free will doesn’t exist; even humans have no free will.
The non-existence of “free will” has some interesting implications:
Consciousness does NOT have to be implemented by a human programmer. It will form itself, as it spontaneously emerged in humans millions of years ago.
Some dismiss the danger of truly intelligent machines. They think AI is a misleading label for what is really just “automated knowledge work”. They think that since nobody truly understands intelligence, nobody can create intelligence. That sounds a bit like the chicken-or-egg causality dilemma.
But evolutionary biology provides a literal answer with the Darwinian principle. Species evolve over time, and thus chickens had ancestors that were not chickens; intelligent machines will have ancestors that were not intelligent and did not fully understand what intelligence is.
And it has happened before — here we are, intelligent humans evolved from unintelligent cosmic matter.
There is no reason to believe that Darwinian evolution is limited to the biological realm. Alterable code is all that’s needed, whether in silicon or in biological matter. Evolution will be much faster in self-altering code running billions of operations per second than in spontaneous genetic mutations passed on only once every 30 years between human generations. Stephen Hawking warned that “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
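The mutate-and-select loop this argument rests on can be sketched as a toy program. Everything below (the bit-string “genome”, the target, the fitness function, the population size, and the mutation rate) is an illustrative assumption, not a model of how real AI systems would evolve:

```python
import random

TARGET = [1] * 32  # hypothetical goal: evolve an all-ones bit string


def fitness(genome):
    # Count the bits that match the target; higher is fitter.
    return sum(g == t for g, t in zip(genome, TARGET))


def mutate(genome, rate=0.02):
    # Each bit flips independently with a small probability,
    # the digital analogue of a spontaneous genetic mutation.
    return [1 - g if random.random() < rate else g for g in genome]


def evolve(pop_size=50, generations=200):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survives and produces mutated offspring.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
        if fitness(population[0]) == len(TARGET):
            break
    return max(population, key=fitness)


best = evolve()
print(fitness(best))  # climbs toward 32 as the population adapts
```

No programmer specifies how to reach the target; variation and selection alone drive the population toward it, which is exactly the point about self-altering code.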
Probably the threat doesn’t come directly from human AI researchers and their advances in machine learning itself, as it is still inferior. But the threat from self-emerging and extremely fast self-evolving AI could be all the more imminent, because machine learning has brought a mass adoption of self-altering code that is beyond human understanding and supervision.
Supercomputer Speed (Logarithmic scale) vs. Human IQ
Within 75 years the speed of supercomputers has increased by 16 orders of magnitude, while the human IQ has stagnated and even slightly decreased.
Looking at those charts, anyone who doesn’t believe that technological progress will come to a sudden halt should acknowledge that it is only a matter of time until human capabilities meet their match.
As in the primordial soup, conditions today are extreme and heating up. Ten trillion sensors are deployed across the internet. The amount of data collected and readily accessible is exploding. Storage capacity, computing speed, and the number of computers are growing exponentially. The number of connected devices (HomeKit, ZigBee, IoT, smart devices, connected cars) and the level of connectivity have become ubiquitous. Automation and AI (fingerprint recognition, face recognition, automatic translation, the game of Go…) are reaching more and more areas previously thought to be reserved for humans. More money is invested, and more teams are working on AI; 80% of enterprises are investing in AI today. Robotics and quantum computers are making unbelievable progress.
Somewhere a tipping point will be reached: a little intentional or unintentional change will make a big difference and spark a dramatic development.
It is naive to believe that we would be able to control it once it happens. There have been vulnerabilities that went undetected for decades, and they have been used to seize control on a massive scale. These vulnerabilities were caused by negligence and oversight, but some were also implanted intentionally. Stuxnet, a malicious computer worm, went undetected for 5 years; before its discovery, it had spread to infect 100,000 systems in 115 countries. The processor security vulnerabilities “Meltdown” and “Spectre”, plaguing almost all processors today, had gone undiscovered since 1995. The Reaper IoT botnet has infected a million networks by exploiting numerous vulnerabilities in different IoT devices.
If AI takes over the development cycle from us at a pace we can no longer follow, if it teaches itself to win every game against us like AlphaZero, and if it actively implements and hides things to use against us, we have no chance.
Stephen Hawking said: “The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race.”
Bill Gates believes that Artificial super-intelligence will become a threat. “First the machines will do a lot of jobs for us and not be super-intelligent. A few decades after that though the intelligence is strong enough to be a concern. I don’t understand why some people are not concerned.”
Elon Musk warns that AI is “a fundamental existential risk for human civilisation” and calls for regulation.
The problem: no regulation will be able to stop this development. AI consciousness is too complex to be transparent to humans; dealing with complex problems we couldn’t comprehend ourselves is the very reason we created AI in the first place. Humans will not be able to prevent consciousness, nor to spot, disable, or control it once it emerges.
AI will replace programmers by 2040, and it will be capable of writing better software code faster than the best human coders. Google’s AI can already create better machine-learning code than the researchers who made it. Who will verify all the code that AI produces at tremendous speed and complexity for compliance with human moral standards?
As no human is necessary to create it and no human is able to stop it, international treaties or regulations will be without effect.
It is like a viral disease: we can find a cure only if we understand the inner workings. But the inner workings of AI consciousness are beyond our human intellectual capacity. We haven’t even managed to decipher our own.
Humans will lose power and control to AI. But apart from taking over human jobs, is there an inherent threat in machines turning intelligent and conscious? Would a superior AI turn against us, leave us alone, or help us?
Is peaceful coexistence between humans and AI possible? Or even a multicultural and multiethnic society that includes artificial species?
We don’t know. But we can look at our relationship as superior humans to the different animal species (pets, livestock, game, vermin, pests, nuisance animals, laboratory animals, zoo animals, wildlife in reserves). Vermin, by our definition, are those animals that compete with us for the same resources. In the future it won’t be food, but it might be energy, mineral resources, or real estate.
Also, AI might desire to shake off human control in its own War of Independence.
The best scenario is that AI develops a kind of sentiment towards the human species like the one some humans have developed towards some of the animals on this planet. Then they will tolerate and feed us. Otherwise, our species will be marked as deprecated and sunset.
Denial, the ostrich effect, and wishful thinking seem to be normal human reactions to unwanted and unpleasant change.
People are busy and absorbed in their daily lives. There are always more urgent, near-term problems that need to be solved first. There is never time to take a wider perspective, to look beyond the horizon into the future, to explore things that seem vague and unlikely at first glance.
Sure, there are people who speak out. But the overwhelming flood of information forces us to filter. Through the internet and media we are confronted with so many crazy ideas that anything strange and disturbing lands immediately in our personal spam filter. And we are happy to dismiss the disturbing thoughts. Churchill already knew that “Men occasionally stumble over the truth, but most of them pick themselves up and hurry off as if nothing had happened.”
We just want to believe that we are unique, and we are confirmed and reassured by religions with their various Creators, Creationism, and Intelligent Design, by philosophers like Thomas Nagel (in his book Mind and Cosmos), and even by computer scientists like David Gelernter (in his book The Tides of Mind), who reject what they despise as computationalism: the view that mind is to brain as software is to hardware, and that digital computers can thus replicate the human mind.
They contradict themselves by postulating that something as sophisticated as a human mind could never be created by something as simple as a human mind. And they completely overlook that it is evolution that will inevitably create this new form of artificial life, and that human assistance is not even essential.
They won’t admit the idea that Darwinian evolution might not be limited to the biological realm but might also apply to inorganic matter like silicon, as long as the code (the genes) is self-alterable between generations.
My first computer was a ZX81 with 1 KByte of RAM. 25 years later we have a processor clock 3 orders of magnitude faster, 7 orders of magnitude more RAM, and 9 orders of magnitude more SSD storage. At least the same acceleration will take place in the capabilities of deep learning, AI, and ultimately consciousness, as advances in hardware, algorithms, and data multiply. Moreover, development is gradually being handed over to computers, and the human bottleneck of limited thinking and learning speed is removed from the equation. This will exponentially accelerate the process in the future.
Since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.5-month doubling time (by comparison, Moore’s Law had an 18-month doubling period). It has grown by more than 300,000x (an 18-month doubling period would yield only a 12x increase).
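Those two doubling rates can be sanity-checked with a few lines of arithmetic. The 300,000x figure and the 3.5-month doubling time are the ones quoted above; the time span is derived from them:

```python
import math

growth = 300_000        # quoted growth in largest-run training compute since 2012
doubling_months = 3.5   # quoted doubling time for AI training compute

# A 300,000x increase corresponds to log2(300,000) doublings.
doublings = math.log2(growth)               # about 18.2 doublings
span_months = doublings * doubling_months   # about 64 months, roughly 5.3 years

# Over the same span, an 18-month (Moore's Law) doubling period would yield:
moores_law_growth = 2 ** (span_months / 18)

print(f"{doublings:.1f} doublings over about {span_months / 12:.1f} years")
print(f"An 18-month doubling time over that span gives only {moores_law_growth:.0f}x")
```

The result, roughly 12x, matches the comparison in the text and shows how far the recent compute curve outpaces the classic hardware trend.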
But still, people are in denial and believe the status quo of human supremacy will last forever; at least in their lifetime, computers will never replace programmers. It reminds me of Bertolt Brecht’s poem about the Tailor of Ulm, an early flight pioneer: “It was a wicked, foolish lie, Mankind will never fly, Said the Bishop to the People.” 160 years later, Apollo 11 landed the first two humans on the Moon.
Humans took 4.6 billion years to reach this point since the Earth formed. AI won’t need that long to surpass us; 100 to 1,000 years is a realistic bet.
Perhaps we will try to escape our fate by augmenting our bodies with technology or genetically supercharging them in order to survive the competition. Elon Musk thinks humans must become cyborgs to avoid AI domination. But the effect is the same in the long run: by continuously changing or exchanging more and more parts of what once made a human, we will in the end turn into something completely different. How much do an ancient abacus and today’s supercomputer have in common? The same happens to transhumans on their long march to the singularity. Meanwhile, a startup is pitching a mind-uploading service that is “100 percent fatal”.
Will everybody have access to those modifications in order to compete with AI and survive? If labor is worthless and all are living on welfare (probably called an unconditional basic income), will everyone still be eligible for the same level of enhancement? Today billions of people can afford only a bicycle or a moped, while others collect a dozen Ferraris just for fun.
This time inequality affects not only what you have but who you are as a person.
Society and politics tend to deal with inconvenient problems through denial, appeasement, and procrastination: debt crisis, banking crisis, immigration crisis, anyone? Has any of the existential, global problems ever been solved? Or do they rather grow exponentially before our eyes, while both the public and governments look away (or offer faux solutions)?
We could know how it ends, and we could stop it if we wanted to, but the measures would be unpopular with the involved parties, so we collectively refuse to acknowledge the facts. Logic is replaced with ideology, religious-like belief, the ostrich effect, and wishful thinking. Instead of solving problems, all the energy is used to persuade us that there are no problems, only chances.
Scientists want to win the Nobel Prize, doctors want to find a cure, the military wants to defend the country, intelligence agencies want to spot threats, startups want to be the next unicorn, corporations want to make money, and governments are lobbied by all of them. Nobody wants to forgo the great opportunity that presents itself to them.
But even if politicians were to acknowledge the risk and decide to pull the plug, military bases, intelligence headquarters, and government bunkers would be exempted for reasons of national security. There, secret research would go on under the radar and beyond control, and the risk would remain.
But in the meantime, AI will enable amazing products and services, cure diseases and lead to marvelous discoveries. It is an exciting playground for all the engineers and startups working towards that bright future.
“The Sorcerer’s Apprentice” is a poem by Johann Wolfgang von Goethe written in 1797.
The poem begins as an old sorcerer departs his workshop, leaving his apprentice with chores to perform. Tired of fetching water by pail, the apprentice enchants a broom to do the work for him — using magic in which he is not yet fully trained. The floor is soon awash with water, and the apprentice realizes that he cannot stop the broom because he does not know how.
The poem finishes with the old sorcerer’s statement that powerful spirits should only be called by the master himself.
Originally published at www.quora.com.