Why Our Grey Matter Will Bring Us Into Grey Areas: Ethics, Robotics & AI

Written by RuairiLuke | Published 2017/10/05
Tech Story Tags: artificial-intelligence | ethics | philosophy | technology | robotics


Still From Blade Runner (1982)

*Disclaimer: This piece contains spoilers for Blade Runner, Blade Runner 2049 and other sci-fi-related media. Why you haven’t watched either film is beyond me, especially the first Blade Runner. Go, watch it now. Seriously! This article will still be here when you get back from having your mind blown — unless, of course, the world ends for whatever reason.*

“The robots are coming! The robots are coming!” Clichéd sci-fi movie lines aside, the ethics of artificial intelligence is a topic worthy of debate. Whilst I have argued previously that AI will hugely benefit humanity, lingering questions and fears remain amongst many, even those who advocate for greater AI development: Will they evolve into some Skynet-style entity and launch nuclear weapons, destroying all of humanity? Where should we draw the line when it comes to developing AI and other robotic, computational or machine entities? Will androids like the replicants of the Blade Runner series — how good was that sequel, am I right? — or the synths of Fallout 4 become a reality? If so, what does it mean to be human if, as the *spoilers* rather badass female replicant in Blade Runner 2049 says to Ryan Gosling, the machines we create become “more human than human”?

Weaponising The Potential — AI & Weaponry

Taken From An Article in The Guardian

This has always been the main fear of AI development: we develop a machine so advanced and so intelligent that it outwits us, and the world ends up destroyed by its launching of a nuclear bomb or its creation of a virus that wipes out the human race. That’s what most people think of when you talk about weaponising artificial intelligence, but in truth we’ve been doing it for years, and arguably in a more sinister way.

In a comment piece for the scientific journal Nature, Prof. Stuart Russell of UC Berkeley called on fellow scientists and tech developers to consider the risks of lethal autonomous weapons systems, otherwise known as ‘LAWS’. Russell noted, quite rightly, that every combat engagement has to comply with the Geneva Conventions and the general ‘rules of war’. War should only be sought if (a) it is necessary after the exhaustion of all other options; (b) a distinction can be made between combatants and civilians; and (c) the force used is proportionate to what the combat could gain (in other words, the expected benefits outweigh the harm done). At present, it is near impossible for AI-based systems to determine any of the above.

But here’s the thing — the technology to develop this type of weapons system is already in use and is continually being developed to even more advanced levels. Think of the use of unmanned drones in Libya, or heat-seeking missiles used to destroy enemy aircraft and bases. They’re all machines, right? And that last one in particular knows how to find its target without much human control (follow the heat source it’s locked onto and, well, boom). Whilst we don’t yet have the technology to allow machines to determine targets based on their own intelligence, it is likely that in the future we will. Russell is right to be concerned about this potential use of AI. Whilst right now most nations agree that all machine combatants should have a “meaningful (degree) of human control” over them, what exactly we mean by “meaningful” is yet to be determined and is likely to change as the years pass and the march of technological development continues.

Indeed, so great are the ethical concerns around this issue that an open letter was written by Elon Musk and over one hundred other robotics and AI experts to the UN calling for a ban on AI weaponry, whilst similar calls have been made by physicist Stephen Hawking and Apple co-founder Steve Wozniak. That letter was written for a reason: whilst nobody can quite see a Terminator-esque scenario happening, given the infinite regress the idea involves, we all know that machines, even intelligent ones, are liable to glitch — what happens when a missile meant for an ISIS military base ends up hitting a civilian target like a hospital or school because of a malfunction?

These Feelings Inside — Machine Emotion & The Limits of Human Morality

From Channel 4’s TV Series, “Humans”

In Blade Runner and its sequel, the phrase “pleasure model” is commonly used to describe replicants that will, for a price, carry out whatever sick, twisted sexual fantasy you may or may not have locked away in the back of your mind. Without getting into a debate about whether prostitution itself should be legal, the ethical conundrum that arises from the existence of these “pleasure models” is this: is it morally right to create something that is, in essence, a sex slave? This theme appears not only in big-screen sci-fi flicks like Blade Runner, but also throughout the brilliant Channel 4 series “Humans”. Again, it seems that for whatever reason, one of the first things human beings — mainly men, it seems — wanted to do with robots was build one so realistic and human-like that they could have sex with it. In Humans, a line from one such ‘sex robot’ really stuck with me: “What you do with us makes me wonder what you do with real women”.

If the androids we create are intelligent and sentient enough to develop their own morality and sense of worth, then surely they must be treated as human, or at the very least be afforded the same level of dignity as us? We campaign against the cruel use of animals in the circus or in the lab — shouldn’t the same concern be extended to whatever type of replicant or android we design? This question is pondered by Nick Bostrom and Eliezer Yudkowsky in their paper, “The Ethics of Artificial Intelligence”. As they note, AI programmes and robotic technologies do not currently possess the ability to feel, but that could all change in the future.

If we created a system or machine that could feel pain, was self-aware and had a degree of intelligence (even one greater than our own), then morally we would place it at least at the same level as, say, circus elephants or tigers. We would — or rather, should — see its enslavement as morally reprehensible. We all look back on human slavery with disgust, but it is a sad fact that it was once considered acceptable, defended and even encouraged by the great minds and leaders of the day. Whether we will come to see the potential enslavement of AI in the same way remains to be seen.

They Took Our Jobs! — The Use of Robotic & AI Technologies in the Workplace

From Magoda

A final issue to consider is the fact that AI and other forms of robotic and computational technology quite literally replace human beings in the workplace. Those quick self-service lanes that for whatever reason can’t scan that jar of pesto you want? Well, whilst they may be quicker than whoever is behind the counter, they have more than likely taken a job from a human being who needs a salary to, you know, live.

Whilst some may question whether this issue falls under the topic of morality and ethics — it isn’t anywhere near as ethically complex as, say, designing synthetic humans — there are still clear ethical implications of our continued development of AI technologies for human employment and dignity. In a world where good, stable employment is nearly impossible to find, the computerisation of the workplace has raised major questions about the value of human agency and life. And when I say value, I mean the literal, financial value of employing a human being.

The 2016 US Presidential election was won by Trump not because free trade or open borders cost jobs, but because so much manufacturing is now automated — a trend that is set to have a knock-on effect on other sectors of employment, both blue and white collar. Trump won because he (falsely) offered hope to those who had lost, or were in danger of losing, their jobs to robotic insurgents. There are of course benefits to automation, such as lower costs for manufacturers, which should mean the price of their products can be lowered, making them easier for the average consumer to purchase. But if the average consumer is without a job due to that same automation, then such products, no matter how cheap they become, remain out of reach.

This brings into focus the question of human dignity: What are we worth if we are replaced by machines? Self-worth is an incredibly fragile thing, and something is going to have to be done — largely by governments — in order to make sure those who lose their jobs due to automation keep some form of self-worth alongside financial stability. This could be done through the introduction of basic income in one form or another, but the consensus on the topic is far from settled.

Curiosity Killed The Cat — Concluding Thoughts

At the end of Blade Runner — if you can’t tell, I really like that movie — the replicant Roy delivers what is widely considered one of the best closing monologues in science fiction, and indeed one of the most well-known in cinema history. In it, he recounts all that he has experienced, noting how, when he dies, those memories will fade “like tears in rain”. Aside from hitting you right in the feels, the scene marks the point in the film where the viewer realises that the replicants Harrison Ford’s character has been hunting for going ‘rogue’ are just as human as he is (or may not be).


As I have said before, AI and other technological advancements in the fields of robotics and computer science will on the whole be beneficial to humanity. However, the ethical concerns and conundrums that come with it, both in relation to us and to AI itself, are worth pondering. What it means to be human has always been a topic of debate amongst philosophers, anthropologists and theologians. The coming robotic revolution has the potential to make that question even harder to answer.

