The Real Threat: Killer Robots or Killer Humans?

Written by davidpetersson006 | Published 2018/04/17
Tech Story Tags: artificial-intelligence | robots | deep-learning | killer-robots | killer-humans


Captain America: The Winter Soldier, where the helicarriers could identify potential threats based on their online activity and eliminate 3,000 people at once

On Sunday, “Animal Assad” launched a heinous gas attack that killed at least 40 people, including families found suffocated in their homes and shelters, with foam on their mouths.

Chemical weapons launched against “defenceless people and populations” kill anything living in the area where they are released. While they are classified as Weapons of Mass Destruction, we are being warned about another weapon, one called the “third revolution in warfare” after gunpowder and nuclear arms: artificially intelligent weapons.

In the new documentary called “Do You Trust This Computer?” Elon Musk warns that “at least when there’s an evil dictator, that human is going to die. But for an AI there would be no death. It would live forever, and then you’d have an immortal dictator, from which we could never escape.” He continues, “If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it. No hard feelings.”

The documentary, which runs over an hour, is packed with AI experts and offers plenty of insight into what AI is and what it is capable of, but it asks the wrong question.

AI Recap

Artificial Intelligence can be trained to do certain things far more efficiently than humans can. The training can be supervised or unsupervised, but ultimately it is a matter of “pattern recognition”: the pattern that makes up a human face, that distinguishes a cat from a dog, that separates the correct path from the wrong one.
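
As a rough illustration of what that supervised “pattern recognition” looks like in code (this sketch is not from the article; the scikit-learn digits dataset and the tiny network are stand-ins for faces or cats, chosen purely for illustration), a classifier is shown examples alongside labels and learns which pixel patterns tend to go with which label:

```python
# A minimal sketch of supervised pattern recognition:
# the model never understands what a digit *is*; it only
# learns which pixel patterns tend to go with which label.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # 8x8 pixel images, labels 0-9
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Training" here means adjusting weights until the patterns fit.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("accuracy:", model.score(X_test, y_test))
```

The article's point holds even in this toy case: the model ends up scoring well without ever knowing what a digit is.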

AI has been able to beat humans at Go and poker simply by studying how they play, millions of times over. It has been able to beat a panel of four radiology experts even though its developers knew almost nothing about the field. It also won Jeopardy!, playing against two champions.

Yet the AI had no idea what it was doing. AI does not understand the system, only how it works. It does not understand the “why,” only the “what.” That is the core of this discussion.

Slaughterbots

This YouTube video shows a possible scenario in which AI is used to target individuals based on their online activity:

The drone swarm is able to penetrate buildings, locate its exact targets and kill them with the utmost precision. Yet what if the targets wore helmets? Or used masks, simply to break the face-recognition pattern? The AI could even fail to recognize them as people at all and miss the target, since AI is only as good as the data it is trained on.

Yes, the military could probably train the swarm to account for those scenarios as well, but again: it takes humans to do that.

The AI has no purpose. It is just a machine that does the task really well. The malicious intent lies with those who assign the task.

Directing AI

The tasks AI solves would be virtually impossible to code by hand; it would take several million lines of code and years to accomplish. AI (specifically, Deep Learning) offers a quicker route. Inspired by the way our brain works, such a system can effectively program itself: it studies millions of examples and adjusts its weights to produce the best results. As such, no one really knows how to program an AI; we can only train it. That is one of the mysterious things about AI: no one knows exactly what is under the hood.
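
To make “adjusting its weights” a little more concrete, here is a toy gradient-descent loop in plain NumPy (a minimal sketch of my own, not anything from the documentary or a real system). The rule being learned is never written into the program; the weights simply get nudged until the error shrinks:

```python
import numpy as np

# Toy "training": learn y = 2*x + 1 from examples alone.
# The rule is never coded into the program; the weights drift toward it.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 2 * x + 1                      # the hidden pattern in the data

w, b = 0.0, 0.0                    # weights start out knowing nothing
lr = 0.1                           # learning rate

for step in range(1000):
    pred = w * x + b
    error = pred - y
    # Gradients of the mean squared error with respect to w and b
    w -= lr * (2 * error * x).mean()
    b -= lr * (2 * error).mean()

print(f"learned w={w:.3f}, b={b:.3f}")   # approaches w=2, b=1
```

Deep Learning does the same thing at a vastly larger scale, with millions of weights instead of two, which is why nobody can point to the line of code where the “knowledge” lives.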

AI can respond to natural language queries, but it does not understand what is being said. It can do sentiment analysis, but it does not understand emotions. Current AI systems are good at doing specific tasks — they are not a holistic entity like the human brain where it all comes together.

While that is not a requirement for producing highly efficient results in certain fields, the takeaway is that AI has no intentions; only humans do.

Can AI become conscious and surpass human intelligence?

The “intelligence explosion” theory claims that if we create an ultraintelligent machine, that machine could then develop even better machines and the intelligence of man would be left far behind. In the words of I. J. Good: “Thus the first ultraintelligent machine is the last invention that man need ever make”.

But as François Chollet puts it: “most of our intelligence is not in our brain; it is externalized as our civilization in the form of books, computers, mathematics, science and the internet.” He continues, “no human, nor any intelligent entity that we know of, has ever designed anything smarter than itself. What we do is, gradually, collectively, build external problem-solving systems that are greater than ourselves.”

In the end, AI is just a tool. The question is who uses it.

Inevitability

While the humanitarian efforts of AI experts to prevent the misuse of their accomplishments in the military field deserve respect, there are few real barriers preventing AI from taking that route. When students can design an AI that beats tactical experts for just $500, what holds dictatorships or terrorists back from achieving the same results? And just one of them is enough to spark the arms race, each side acting under the pretext of “defense”.

In the end, it is humans who are tearing each other apart; AI just makes that more efficient. If there is anything we need to fix, it is ourselves: the humans who can actually think, who can actually understand what is right and what is wrong.

AI is accused of putting people out of jobs, which, the argument goes, will eventually lead to violence. We witnessed similar fears at the dawn of the Industrial Revolution. Yet it went ahead anyway, and it is the foundation of our current societies. But it is not the technological advancement that should be blamed for the violence; it is leaving people behind, with no clear future and no way to make a living, that leads to violence. If people were taken care of, perhaps even retrained to work in the new circumstances, things could be different. But the people in charge only saw the benefits it had for them, not the benefits for society as a whole.

If there is anything we need to change, and can succeed in changing, it is this mindset.

The Lost Battle

It is wrong to regard Hitler as just one person who changed the course of history; one must also consider the bedrock of German society and world politics that allowed his system to emerge. Unlike ancient times, when individuals played the decisive role, in modern times we mostly face systems, which can even swap out their people and keep following the same path. Individuals do matter, but systems provide the foundation on which those individuals play their exceptional roles.

And right now we have systems that grant dictators impunity. Unlike the AI weapons we can anticipate in the future, these atrocities are happening now. If there is one system we need to correct, it is this: the bedrock that can turn any technological achievement into an evil weapon.

Many movies (such as The Terminator or The Matrix) depict a future in which humans fight against machines. We cannot prevent the future; AI will come and spread. The battle against the machines is already lost, but we can win the battle for humanity.

