Why You Should Not Worry About the Rise Of AI

by Luc Claustres, December 21st, 2017

Too Long; Didn't Read

In a previous article I detailed why Artificial Intelligence is probably still far from being "conscious" (at least in human terms). Here I explain why, even if that happens in the near future, AI will still be far from overtaking Homo sapiens as a species: the key to our success is flexible cooperation on a large scale, cemented by shared myths, a collective intelligence that today's AIs are simply not designed for.



Do you fear SkyNet?

In a previous article I detailed why Artificial Intelligence (AI) is probably still far from being "conscious" (at least in human terms). Now I would like to explain why, even if this happens in the near future, AI will still be far from overtaking Homo sapiens as a species. Once again, we first need to understand an underlying "magic trick".

What makes us special?

Example of massive cooperation: Romanian demonstrators gathered in front of the headquarters of the Romanian Communist Party in Bucharest during the 1989 anti-communist revolution

It now seems evident that over the last centuries Homo sapiens has become the most powerful species on Earth. We colonized the most remote parts of our world, explored the depths of our oceans, reached the soil of the Moon, sent probes beyond the limits of the Solar system and have started engineering our own biology. In terms of individual intelligence alone, it would be hard to notice significant differences between present humans and those living tens of thousands of years ago. In various respects, individual Homo sapiens were then even better at crafting tools, and there was no social safety net to save the weakest, so that only the cream of the crop survived. The key to our success is flexible cooperation on a large scale, with shared myths as the cement. You might also call them ideologies, cultures or imagined orders, although in every era people do their best to hide this imagined character and argue that it reflects an objective reality or a natural law.

Among the most important myths are imagined communities such as nations, empires, religions, humanism or capitalism. Other important agents in history are inter-subjective realities, existing only in the imagination of people, like money. As far as we know, other animals are unable to cooperate on such a scale because they only cooperate based on personal acquaintance, and their imagination is limited to objective realities like food. The human ability to give meaning to actions and thoughts is what has enabled our many achievements. Allow me to digress and mention a salient fact with respect to this theory. General Purpose Technologies (GPTs) are "ideas" or "innovations" with important impacts on many sectors of our lives, as the steam engine or electricity were. Information and Communications Technologies (ICTs) work at an even higher level because they improve the production and spread of ideas themselves, and thus the pace at which human myths are born and die. That is probably why, in our time, people can experience dictatorship, endure communism and then try capitalism within a single generation.

What makes AI different?

I am astonished to see more and more articles (like this or this) focusing on the imminent takeover by machines, although the comparisons are always made at the individual level and not at the species level. AIs are usually not designed to survive but to solve very specific, human-centred problems like playing chess. As such they cannot adapt to even slight changes in their environment without being reprogrammed, while humans handle imprecision or changing rules easily on their own. For example, Rethink Robotics builds more general-purpose robots, but they need to be trained by real people and are less efficient than their human counterparts. Their single advantage is that they don't sleep or ask for a raise.

Actually, building an AI that exhibits adult-level performance on basic reasoning often requires less design effort and computation than baby-level sensorimotor skills like mobility, which are far harder to tackle. A lesson learnt by AI researchers during the last decade is that the hard problems are easy and the easy problems are hard. Massive cooperation is probably one of these unanticipated hard problems. The collective and social intelligence accumulated over millions of years of evolution is by far our most important asset as a species. Evolution always operates by small mutations of an existing genome (not by giant random mutations), because any being that exists now has proven skills against the most common problems encountered in the past. Evolution also operates at a massive scale, exploring many different ways of solving unknown problems (unlike AI).

The real question is: what would be the point of creating human-like AI? Having and raising children is probably one of the most natural things a human can do, and the world population has already reached seven billion. We are facing major issues like global warming, so do we really need to create more "humans"? If not, do we simply want to feel like gods, creating beings and then switching them off at will? Or maybe we just want to go back to the horrors of slavery? Mostly for ethical reasons this will not happen, at least not intentionally. That leaves the superintelligence scenario, which is nothing more than science fiction built around vague and ill-defined concepts. For example, recursive self-improvement is highly improbable considering the halting problem: an AI cannot, in general, verify that a rewritten version of itself will even terminate, let alone behave better. Would a superintelligent AI take the risk of committing suicide by self-improvement? More generally, complex systems are fragile and have many reasons to fail individually (local lack of resources, diseases, accidents, etc.).
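To make the halting-problem objection concrete, here is a minimal, purely illustrative Python sketch. It is not code from any real system, and the names `would_halt` and `safe_self_improvement` are hypothetical placeholders; the point is only that adopting an "improved" rewrite safely would require an oracle for termination, which Turing showed cannot exist in general.

```python
# Purely illustrative sketch of why unrestricted recursive self-improvement
# runs into the halting problem. All names here are hypothetical.

def would_halt(program_source: str) -> bool:
    """Hypothetical oracle: True iff running `program_source` eventually halts.
    Turing's halting theorem says no total, always-correct implementation exists."""
    raise NotImplementedError("undecidable in the general case")

def safe_self_improvement(current_source: str, candidate_source: str) -> str:
    """A self-improving AI should adopt `candidate_source` only if it is safe,
    which at the very least means it terminates on its verification tasks.
    Even that minimal check already requires solving the halting problem."""
    try:
        if would_halt(candidate_source):   # undecidable step in general
            return candidate_source        # accept the rewrite
    except NotImplementedError:
        pass                               # no general answer is available
    return current_source                  # keep the old, proven version

# Classic diagonal argument: a candidate built to contradict any such oracle.
troublesome_candidate = """
if would_halt(my_own_source):
    while True:
        pass   # loop forever exactly when the oracle predicts we halt
"""
```

The sketch makes a narrow point: such a system cannot fully verify its own rewrites in the general case; only restricted or heuristic checks remain possible, which is why the unrestricted self-improvement scenario rests on vague foundations.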

What should we really fear about AI?


Technology is power; as such, one of my real fears about AI is how people can use it to manipulate the world to their advantage. In addition to big data, farms of remote computers create secret AI-based models of who you are, gathering data from the network, most of the time freely and without you even noticing it. This includes mortgage scores, financial trading, social networking, insurance, e-commerce, advertising, etc. The initial benefit is always the same: efficiency. However, it often does not offset the long-term degradation. For instance, you loved getting things for free or at super-cheap prices from your favorite cloud provider, until you noticed that the factory you might have worked for had closed down for good.

Multiple studies have documented that massive numbers of jobs are at risk in the future. Some people will simply be replaced by other people who master digital working technologies; others could be replaced by AI. The disruptions could pose significant challenges to policy makers and business leaders, as well as workers. And the hypothesis that more jobs will be created than lost is dubious at best. First, the number of jobs that humans perform better than algorithms is decreasing. Second, if the transition happens too fast this hardly matters: low-skilled workers will not suddenly be turned into software engineers. In short, AI threatens our intelligence, not our consciousness. It might become so good at making decisions for us that it would be madness not to follow its advice.

Conclusion


AIs differ so greatly from us that the concept of machines taking over as a species is simply not relevant. AIs are designed, structured and operate in ways that have nothing in common with how humans evolved and behave under natural constraints. They don't share our goals and motivations, they don't feel pain or fear, they don't struggle for their survival or self-preservation. As a consequence they don't need to collaborate or seek control of their environment as we do. Things would probably be different if we were creating true human-like AIs. However, present AIs mimic or exhibit human characteristics for the sole purpose of easing our communication with them. Of course researchers are making progress in fields like evolutionary algorithms or collaborative agents, but this work is still far from "ready for production".

Homo sapiens will not be eradicated by an AI revolt any time soon. As usual, he is most likely to upgrade through a gradual historical process, merging his biology with technology along the way, or something like that. However, he will have to bear in mind that consciousness and intelligence will probably evolve independently, with Homo sapiens mastering consciousness and machines mastering intelligence. Thus, he will need to reinvent himself in a world where intelligence is no longer his exclusive property.

This article would not exist without the following books:

Note: the views expressed in this article are those of the author and do not necessarily reflect the views of the cited references.

If you liked this article, hit the applause button below, share it with your audience, follow me on Medium, or read more for insights about the next step of Artificial Intelligence, the evolution of Computer Science, the goodness of unlearning, and why software development should help you in life.