AI has been with us since the 1970s, or, some would say, even earlier. There were many ideas in the field, but progress was slow and the results fell far short of what we had hoped. That changed about ten years ago. Breakthroughs in computing, along with the demand for "intelligent" products, revived the excitement about what AI is capable of.
So we have been introduced to things like Siri and Alexa, smart home devices, internet services, and many more products that all use some kind of AI to improve the experience. Advanced or not, AI is already part of many things in our lives, and it is making them better. Until recently, most of the algorithms that controlled how a "bot" should behave were hard-coded into the software that powered it. In a way, the behaviour was limited by the programmer's skill, but also by human nature itself: we tried to create an emulation of ourselves.
A few years back, machine learning and deep learning established themselves as the next big step in AI's evolution. At last, we are trying to free machines from human limitations and give them the ability to learn on their own. We are only at the beginning, but as field tests have shown, machines can learn faster, learn better, and, most importantly, never forget. Memory is perhaps the greatest tool that has helped humans advance through the years.
Now we have software that can learn to drive cars, fly planes, create art, and even play video games better than we do. Very large neural networks are responsible for allowing these machines to learn and become intelligent. But there is a problem with that. Researchers have noticed that the deeper the network, the harder it is to understand how the machine made a particular decision. When thousands of emulated neurons are involved, it is almost impossible to look into the system and understand how it works. This phenomenon is known as the AI "black box", and, like human behaviour, we may never be able to fully understand it.
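To make the black-box point concrete, here is a minimal sketch (illustrative only, not any real system): a toy, untrained two-layer network in plain Python. Even at this tiny scale, a "decision" is nothing but cascaded weighted sums, with no human-readable rule anywhere in the weights.

```python
import random

# Toy two-layer network with random (untrained) weights. All names and
# sizes here are made up for illustration.
random.seed(0)

N_IN, N_HIDDEN, N_OUT = 4, 16, 2
W1 = [[random.gauss(0, 1) for _ in range(N_HIDDEN)] for _ in range(N_IN)]
W2 = [[random.gauss(0, 1) for _ in range(N_OUT)] for _ in range(N_HIDDEN)]

def layer(vec, weights, relu=True):
    """One layer of 'emulated neurons': weighted sums, optionally clipped."""
    cols = len(weights[0])
    out = [sum(v * row[j] for v, row in zip(vec, weights)) for j in range(cols)]
    return [max(0.0, s) for s in out] if relu else out

def decide(x):
    scores = layer(layer(x, W1), W2, relu=False)
    return scores.index(max(scores))  # pick the higher-scoring output

choice = decide([0.5, -1.0, 2.0, 0.1])
# The "reasoning" behind `choice` is spread across 4*16 + 16*2 = 96 numbers.
# In a real deep network it is millions, which is why inspecting the weights
# reveals almost nothing about why a particular decision was made.
```

The point of the sketch is only that there is no single line of code to point at and say "this is where the decision was made"; the logic is diffused across every weight at once.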
Further reading: "There's a big problem with AI: even its creators can't explain how it works" (www.technologyreview.com)
Trying to predict what a system like that will look like ten years from now, with all the advancements in technology, I can only say one thing: it will be a lot like a human soul. Take a step back and observe the similarities. Human behaviour, or a decision we make, can be explained to an extent, but the true steps that led to that conclusion are unknown even to us. In an advanced system, decisions can be very hard for a human to understand, accept, or reason about. The same thing can happen between two humans who come from different cultures or have different interests.
Is this frightening? Maybe it is. That's why many people in the field are starting to demand that AI be regulated to avoid unpleasant surprises. Elon Musk recently said that AI is a bigger threat than North Korea, and Russia's president Vladimir Putin said that the nation that leads in AI will be the ruler of the world. The need to control how we use it is real, and it should be addressed as quickly as possible.
So the question is: are we on the verge of playing God? What is a living entity? If we create a robot out of composite materials instead of flesh and give it the ability to sense and to make decisions based on what it has learned, is it alive? What if, after a while, our machine gains contextual reasoning? What if one day it realises that it wants more than what we told it to do? What if it says "I want to be free"? Do we have the right to regulate that?