Future of AI & Humans — Winter is coming!

Written by sheshank-sridharan | Published 2017/08/01
Tech Story Tags: artificial-intelligence | future-of-ai | winter-is-coming | ai-wikipedia | ai-and-humans


My wife asked me an interesting question: “Do you think where we are going with AI is good?” She went on to discuss ethics for robots, based on a discussion she had with her colleagues. After two solid days of thinking and reading, I decided to write my response :)

Here is how I have structured it:

  • How I define artificial intelligence
  • How AI will affect our economy
  • A philosophical inquiry into our future

This is the Wikipedia definition of AI:

Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal.[1] Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.[2]

As machines become increasingly capable, mental facilities once thought to require intelligence are removed from the definition. For instance, optical character recognition is no longer perceived as an example of “artificial intelligence”, having become a routine technology.[3] Capabilities currently classified as AI include successfully understanding human speech,[4] competing at a high level in strategic game systems (such as chess and Go[5]), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data.

I want to approach the definition in a simpler way. Intelligence is the ability to acquire and apply knowledge and skills. So, in the traditional sense of the word, any human who is able to acquire knowledge or skills and apply them is intelligent.

The first word in AI is artificial. Artificial is defined as something made or produced by human beings. Putting the two words together, a simple definition of artificial intelligence is:

Something made by humans which has the ability to acquire & apply knowledge/skills.

What about the creation of knowledge? This is where the definition fails. The possibility that AI can create, and thereby displace the human as its creator, is unfathomable to us. To make things worse, if what it creates is something we don’t understand, we won’t be able to deal with it. Honestly, this is far from the current state of AI. I will come back to it in the philosophical section of this post.

What will AI do to our economies?

From time immemorial, man has been automating his tasks and making them more efficient with the aid of machines. At every point, there were inventions large and small that transformed our lives, and most of them replaced some manpower. Automating tasks frees up humans to take up higher tasks. In the case of AI, we may have no higher tasks left, because the very nature of the concept is to self-learn and improve. Every time a new innovation disrupts an industry, people lose jobs and the distribution of money within the economy gets skewed. The industrial revolution brought huge changes to the economy: productivity increased, but jobs were lost. Over the long run, though, the effects were positive and labor shifted to different jobs. The impact of AI on overall productivity will be positive, but the distribution of wealth will get severely skewed, with lots of money in the hands of very few. Lower-end jobs will disappear, leaving many lower- and middle-class people unemployed. For the educated and skilled working class, displacement will hurt less because of their ability to learn new skills.

We can already see micro-patterns of technology replacing people in the software industry. A few decades ago, when enterprise applications flooded the market, a parallel market was created for system administrators. The software services market rode the wave: companies hired admins in large numbers and contracted them out. It was a great business. Production systems needed administration to scale and to maintain business continuity, and millions made a living in this model for decades. Meanwhile, cloud technologies were being developed and perfected in the background by players like Amazon. Large service companies ignored them. Today, with DevOps and inexpensive cloud infrastructure, large enterprises no longer need armies of admins. They are replacing traditional vendors like Oracle with cheaper open-source alternatives. Who builds new products on Oracle DB when you can do it on MariaDB or Postgres? Set it up on AWS and it will scale with little effort. With large companies laying off admins, everybody wants to pretend that evil corporations are doing something crazy. It is simple: they hired people to do a job, and that job no longer exists. Within the software industry, keeping up with trends is not a nice-to-have skill, it is a survival skill. Jobs will go, and if we aren’t prepared, we shouldn’t complain.

Whenever any new disruptive technology enters the market, I see three phases:

  • The Skeptic phase — Everybody knows it is in its early days, so they view it with detachment. Skepticism is high.
  • The ‘Spotlight’ phase — In this phase, the benefits of the technology have been identified by a few big players. Most people don’t understand the technology, but that doesn’t stop them from talking about it. Marketing teams jump on the jargon and start adding it to their sites, white papers, etc. But very few are actually working on it.

Dan Ariely’s Quote on Big Data a few years ago

  • The ‘join-the-bandwagon’ phase — In this phase, significant progress has been made by the early adopters and a select few mainstream players. The models of where the technology can be applied are getting clearer, so people actually start looking for jobs in the area. Training institutes start advertising courses and MOOC players start building content. This is where jobs start getting cut and people begin to panic.

If you take any of the key technologies of the recent past, this is what has happened. Think Cloud, Big Data, Blockchain, Machine Learning, Deep Learning and now AI. AI is very close to the end of the ‘Spotlight’ phase.

AI is going to create a huge number of jobs to conceptualize and build it in different areas. At the same time, it is going to displace large swathes of people from their jobs. These unemployed people won’t be able to find work elsewhere easily, and depending on their age group, they may not have the opportunity to upskill. A guy who has operated a stacker at a port for 20 years won’t be able to learn AI programming. I read a paper from the White House that talks about policy making for the loss of jobs due to the advent of AI. It is an interesting start, but even a developed nation like America will be unable to sustain this through policy changes alone. The paper expects that drivers will be the first to lose their jobs to driverless vehicles, with commercial trucking becoming machine controlled. Even accountants and doctors are expected to lose jobs. I came across diabetic retinopathy diagnosis by AI, and the results were good. While people are busy worrying about what Trump will do to the outsourcing industry, support staff are getting replaced by chatbots and virtual assistants. Nobody is going to get that job.

Take a look at this video of an advanced masonry robot. I don’t see why a deep learning program can’t iron out the things it can’t do now.

We are going to see a massive shift over the next 10 years, and a cognitively enabled world powered by AI in 20. The good news is that it will take those 20 years, so until then there will be jobs building the systems that replace humans. In the example of driverless transport, I expect that roads and infrastructure will need overhauling, and that will create jobs. In the video clip above, masons will continue to assist the robot for a few years. The effect of these changes on economies in different geographies will be drastically different. In India, most middle-class homes are cleaned by maids who work for a low wage. The low wage is the only reason they are in demand in a very cost-conscious market. Since the population is very large, each maid works in multiple homes and makes a living. If someone were to introduce a low-cost, high-efficiency robot to replace them, there would be social chaos. On the other hand, the same invention in a country like the US wouldn’t have a dramatic effect, because most people there already do their own household chores due to labor costs. This is also why appliances like the dishwasher never took off in the common Indian household: the alternative is cheaper.

The philosophical angle

Before we ask ourselves what will happen if AI gets the better of man, we need to ask ourselves what our purpose on this planet is. If past data is an indicator of what we may do in the future, what does our past tell us about what we have achieved? I haven’t found a large positive impact that we have had on the environment. If insects were to disappear, there would be a large impact on fauna; if we disappeared, would there be an impact at all?

One of my favorite scenes from The Matrix

We have progressed in comparison to what we were centuries ago, and we have wiped out a lot to achieve that progress. We built civilizations and burnt them to the ground, wrote rules and broke them, waged wars, cut down forests, poisoned lakes; the list goes on. We are most likely the reason several megafauna species are extinct on the continents we migrated to.

Back to AI. Our fear as humans is that AI will get the better of us and that we may no longer be the highest beings on this planet. This is rational, but far-fetched given AI's current state. Our solution, writing an ethics framework to control it, is irrational. With all our powers of reasoning and wisdom, we must understand that it is impossible to define a framework of rules with which we can quantify something as right or wrong. That is because, unlike programming, ethics is subjective. While I think it is wrong to kill an animal and eat it, you might completely disagree. It is subjective, and thus I am both right and wrong. Consider soldiers shooting soldiers on the other side. Both could be ordinary people sent to war; their belief that they are doing the right thing helps them justify the killing. What about a terrorist? His or her belief system is so strong that it is able to justify killing innocent people. It is irrationality masked as rationality.

An ethics framework would have failed even before we wrote it. Unlike objective problems like solving a Rubik’s cube, it will be impossible to agree on a singular solution.

My wife brought up an interesting example her colleagues were debating. A driverless car is carrying a passenger. A truck is speeding towards the car from behind, and the algorithm knows there is going to be an accident. The car has three options:

Option 1: Stay in the current lane and allow the truck to ram it. The passenger will die.

Option 2: Change lanes to the right. This will result in a fatal accident for the car on the right, which is carrying a 65-year-old, but the passenger in the driverless car will survive with minor injuries.

Option 3: Change lanes to the left. This will result in a fatal accident for the car on the left, which is carrying a 10-year-old girl to school. Again, the passenger in the driverless car will survive with minor injuries.

How do you write the rules for this problem? (Would love to hear your take in the comments section)

The basic premise of this problem is that there is a right choice. In choosing that option, the driverless car would inflict the least possible damage in the larger scheme of things. Some data points the algorithm needs to make a human-friendly decision are:

  • What are the dependencies on the 65-year-old? Does he have kids at home who depend on him? Is his wife employed? Is he an important person, like a CEO or a community leader building a dam that can help thousands of farmers?
  • Who are the girl’s parents? What are the long-term repercussions of her death? Is her father an important person?
  • Who is the passenger? Is it important to save the passenger’s life at all?

From a pure data acquisition and crunching perspective, this is a complex problem, with multiple data inputs that need to be fetched on the fly and several considerations that need to be weighed in a fraction of a second. A human being could not possibly make such a complex decision, assuming there even is a right choice. AI would be a better decision maker.
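To make the subjectivity concrete, here is a minimal, purely hypothetical sketch of what a rule-based “least harm” scorer for the three options might look like. Every field, weight and number in it is invented for illustration; nothing here reflects how a real autonomous-driving stack works. The point is that the “right” answer flips the moment someone tweaks a constant.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Outcome:
    description: str
    fatality_age: Optional[int]   # age of the person who would die, if anyone
    dependents: int               # people who depend on that person
    passenger_survives: bool

def harm_score(o: Outcome) -> float:
    """Lower score means 'less harm'. The weights are the real problem:
    they hard-code someone's subjective ethics as constants."""
    score = 0.0
    if o.fatality_age is not None:
        score += 100.0                        # any death at all
        score += (80 - o.fatality_age) * 1.0  # a 'years of life lost' bias
        score += o.dependents * 10.0          # a 'dependents left behind' bias
    if not o.passenger_survives:
        score += 100.0                        # extra weight on the paying passenger
    return score

options = [
    Outcome("stay in lane, truck rams the car", 35, 0, False),
    Outcome("swerve right, the 65-year-old dies", 65, 2, True),
    Outcome("swerve left, the 10-year-old dies", 10, 0, True),
]

for o in sorted(options, key=harm_score):
    print(f"{harm_score(o):6.1f}  {o.description}")
# With these made-up weights the car swerves right (135 vs 170 vs 245);
# triple the dependents weight and it swerves left instead.
```

None of this makes the decision “right”; it just makes someone’s subjective ethics executable, which is exactly the problem the two thoughts below wrestle with.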

Thought 1: If all the vehicles on the road were machine controlled, the network of vehicles communicating with each other would be so efficient that the situation itself would never occur. In other words, if the driver of the truck were a machine, this would never have happened. Accidents would be all but eliminated, because most accidents occur due to an early warning sign that a human ignores. A self-aware system would never ignore a sign and, through deep learning, would learn the possibilities of failure beforehand. For each failure scenario, occurring independently or in combination, the system would build a way of handling it.

Thought 2: A human being defines the right choice in this scenario. In other words, the AI system is governed by an overarching set of rules defined by a human. By definition, this is flawed, because our choices are subjective. My fear with this second thought is that someone will write a rule to give preferential treatment to a race, religion, sex, etc. The system is only as superior as the rules it is governed by.

This brings us to a more controversial question.

Should AI consider self-preservation?

If AI reaches such an advanced state that it surpasses humans as the controlling force on the planet, then self-preservation is inevitable. Self-preservation leads to making choices that disregard other beings, like displacing animals in a forest in order to grow more crops, or building an apartment block on a lake bed to house more people. The assumption is that the destruction of a forest is a smaller issue than starving. In our accident example, the car’s algorithm could put self-preservation, or the preservation of machines, first. It would first analyze which option results in the worst possible damage to the machines involved, weighted by their function in the machine ecosystem, and then take the route of least damage. The human angle wouldn’t be considered, or would be considered only as a nice-to-have.
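Returning to the earlier sketch for a moment: a machine-first objective for the same scenario could be as crude as the following, with all numbers again invented for illustration. What matters is only that human harm never appears in the cost function.

```python
# Hypothetical machine-first scoring of the same three options.
# The 'unit value' of 5.0 stands in for each machine's role in the machine
# ecosystem; every number here is made up.
machine_damage = {
    "stay in lane (truck and car both wrecked)": 2 * 5.0,
    "swerve right (one neighbouring car wrecked)": 1 * 5.0,
    "swerve left (one neighbouring car wrecked)": 1 * 5.0,
}

least_damage = min(machine_damage, key=machine_damage.get)
print(least_damage)
# The two swerves tie, and nothing about the 65-year-old or the child breaks
# that tie, because people are not part of the objective at all.
```

Which swerve such a system picks would come down to whatever arbitrary tie-breaker it uses, which is precisely the “nice-to-have” status of the human angle described above.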

So what will happen to humans after AI takes over?

A lion sleeps in the jungle, hunts when it is hungry, and mates when it is time to mate. It doesn’t innovate; it exists and it dies. Its purpose is to survive and satisfy its basic needs. There are thresholds to those needs, and when the limit is reached it stops seeking them. Lions kill for territory and to take charge of the pride, all biologically encoded behaviours. All this happens in the wild until humans decide they want the land, or the lion’s head for their living room. In other words, being the highest species means we can wipe out the lower ones at will. There is no reason to believe that AI won’t do the same. On the flip side, AI might absolve us of work and create systems that let us exist within fixed boundaries, just like the lions, to whom we have given a place in the zoo or the jungle. We humans could paint, climb mountains, run, act and do anything without a systematic purpose that impacts their (AI’s) world. What I mean is, we could do whatever the f@#k we want as long as it doesn’t interfere with what the machines are doing. When our actions impact the machines negatively, they will take action to neutralize the problem. We could coexist peacefully, but our history as a species tells another story.

Some closing notes:

This entire post analyses the impact on the world as we know it today. It is entirely possible that the systems that exist today will no longer be relevant in a world where AI rules. Will money mean anything to AI? Will borders mean anything to AI? Our imagination is limited to what we know today, but in a world where AI creates, things could be very different.

For all you know, our idea that machines and man will be two different entities will change. We will find a way to integrate AI into the human body and make ourselves superior. That would create another beautiful mess.

That’s my very long response to an early morning question.

Winter is coming.

Here are some references I used while researching the economics part of this post; they make for an interesting read:

Artificial Intelligence, Automation, and the Economy, The White House, obamawhitehouse.archives.gov

The onrushing wave, The Economist, www.economist.com

