How to save A.I in 3 easy steps

by Adrien Book, July 11th, 2018

Do the benefits of artificial intelligence outweigh the risks?

The philosophy of Artificial Intelligence is a riddle so confounding that it is unclear where, and how, one would even begin to address the questions plaguing its era. Is A.I a revolution or a war? A god or a pet? A hammer or a nail? Nowadays, A.I can write and analyse books, beat humans at just about every game conceivable, make movies, compose classical songs and help magicians perform better tricks. Beyond the arts, it also has the potential to encourage better decision-making, make medical diagnoses, and even solve some of humanity’s most pressing challenges. It is becoming intertwined with criminal justice, education, recruiting, healthcare, banking, farming… These advances alone could lead many to end the conversation there and then, with overwhelming evidence that the benefits of A.I reach far and wide within society, outweighing the risks associated with such a technology.

Yet, such a worldwide reshuffle of various industries is bound to lead to technological and ethical soul-searching. And though the world is unlikely to see its own AM/HAL/SHODAN/Ultron/SkyNet/GLaDOS bring about the Apocalypse anytime soon, a multitude of uncertainties has nevertheless arisen.

Not least of which is the matter of automatic war-waging: the recent implementation of Google’s A.I capabilities within the American military to improve the targeting of drone strikes has raised serious questions about the battlefield moving to data centers, and about the difficulty of separating civilian technologies from the business of war. Those worries echo the ones voiced by the world’s top artificial intelligence minds, who last summer wrote an open letter to the UN warning that autonomous weapon systems able to identify targets and fire without a human operator could be wildly misused, even with the best intentions.

Amazon, Google’s current A.I arch-rival, has also widely publicised its work with government agencies. Some have begun implementing its powerful real-time facial recognition system, which can tap into police body cameras and city-wide surveillance systems. This technology is fraught with ethical quandaries: civil rights organizations argue that any technology used to record, analyse and stockpile images of faces on a vast scale will be used to target communities already besieged by social challenges.

Indeed, despite plenty of potentially valid uses, industrialised facial recognition would alter civil rights, fundamentally changing notions of privacy, fairness and trust. This is no longer science-fiction: Moscow-based NtechLab has an ethnicity detection capability in its face-detection software. Such tools would encourage overt racial profiling by authorities, as studies have shown that machine learning systems internalise the prejudices of the society that programs them.
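
That internalisation is easy to demonstrate. The sketch below is purely illustrative, using synthetic data and an invented hiring scenario: a model trained on historically biased outcomes dutifully learns a penalty for the disadvantaged group, without anyone ever writing a prejudiced line of code.

```python
# Illustrative sketch only: synthetic data, hypothetical hiring scenario.
# The point: a model trained on biased historical labels reproduces the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)        # sensitive attribute (e.g. a minority flag)
skill = rng.normal(0.0, 1.0, n)      # the genuinely relevant feature
# Historical hiring penalised group 1 regardless of skill; the prejudice lives in the labels.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, n) > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)
print("weight on skill:", round(model.coef_[0][0], 2))
print("weight on group:", round(model.coef_[0][1], 2))  # negative: the bias was learned
```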

Finally, though the technology may not be colour-blind, it is collar-blind: arguably the largest shift A.I might lead to in the near future is the systematic automation of dozens of both blue- and white-collar roles. This shift is already well underway in various sectors, thanks to advances in machine learning technology, and represents but one of many waves predicted to leave millions of employees jobless. Experts argue that workers will need to develop larger skill-sets than ever before to prepare for this future. Yet, much like other western economies, the U.S. currently invests just 0.1% of its GDP in workforce training and support programs, lending little hope to a wide array of disappearing professions.

However, some of those fears, though valid, exemplify a profound misunderstanding of the science backing these supposed technological leaps. Most, if not all, of the examples above are a product of machine learning, which is far from the A.Is envisioned in most popular science-fiction movies. Machine learning, in fact, is a rather dull affair. The technology has been around since the 1990s, and the academic premises for it since the 1970s. What’s new, however, is the advancement and combination of big data, storage capacity and computing power.

Open-ended conversation on a wide array of topics, for example, is nowhere in sight. Google, supposedly the market leader in A.I capabilities (more researchers, more data and more computing power), can only produce an A.I able to make restaurant or hairdresser appointments by following a very specific script. Similar conclusions have recently been reached with regard to self-driving cars, which all too regularly need human input. A human can comprehend what person A believes person B thinks about person C. On a processing scale, this is indistinguishable from magic. On a human scale, it is mere gossiping. Humanity is better because of its flaws, because inferring and lying and hiding one’s true intentions is something that cannot be learned from data.

In fact, A.I breakthroughs have become sparse, and seem to require ever-larger amounts of capital, data and computing power. The latest progress in A.I has been less science than engineering, even tinkering; indeed, correlation and association can only go so far, compared to organic causal learning, highlighting a potential need for the field to start over. Researchers have largely abandoned forward-thinking research and are instead concentrating on the practical applications of what is known so far, which could advance humanity in major ways, though it would provide few leaps for A.I science.

Machine learning is clearly great for specific, specialised tasks, but a self-teaching A.I that can match or best humans across various disciplines is, for now, out of reach. Ironically, artificial intelligence may fall short of matching and besting organic intelligence for the sole reason that it wasn’t built in humanity’s image.

As creators, it is nevertheless mankind’s duty to control robots’ impacts, however underwhelming they may turn out to be. This can primarily be achieved by recognising the need for appropriate, ethical, and responsible frameworks, as well as philosophical boundaries. Specifically, governments need to step up, as corporations are unlikely to forego profit for the sake of societal good.

One such framework can be found within President Macron’s nationwide A.I plan, which not only includes $1.6 billion in funding, new research centers and data-sharing initiatives, but also, and most importantly, incorporates ethical guidelines. Much like Asimov’s 1942 Three Laws of Robotics, however, these guidelines are elegant yet inherently subjective, and as such hard to enforce. To palliate these shortcomings, one may be inclined to implement the following 3 rules. Though unaesthetic, they have the merit of being both applicable and impactful in very real ways.

1. A.I Responsibility

This rule may appear blasphemous to many free-market proponents, raised as they are in countries where tobacco groups do not cause cancer, distilleries do not cause alcoholism, guns do not cause school shootings and drug companies do not cause overdoses. Silicon Valley has understood this, and its go-to excuse when its products cause harm (unemployment, bias, deaths…) is to say that its technologies are value-neutral, and that it is powerless to influence the nature of their implementation. That is just an easy way out.

Algorithms behaving unexpectedly are now a fact of life. Just as car makers must now be mindful of emissions and European companies must protect their customers’ data, tech companies must closely track an algorithm’s behavior as it changes over time and across contexts, and, when needed, mitigate malicious behavior, lest they face a hefty fine.
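
What “closely tracking” could mean in practice is simple enough to sketch. The snippet below is a minimal illustration with invented thresholds and synthetic scores: a model’s recent outputs are compared against a baseline recorded at deployment, and a human review is triggered when they drift.

```python
# Minimal monitoring sketch: invented threshold, synthetic scores.
import numpy as np

def drift_alert(baseline: np.ndarray, recent: np.ndarray, threshold: float = 0.1) -> bool:
    """Flag when the mean output score moves more than `threshold` from its baseline."""
    return abs(recent.mean() - baseline.mean()) > threshold

baseline_scores = np.random.default_rng(1).uniform(0.4, 0.6, 10_000)  # captured at deployment
recent_scores = np.random.default_rng(2).uniform(0.6, 0.8, 1_000)     # observed months later
if drift_alert(baseline_scores, recent_scores):
    print("Model behaviour has drifted: trigger human review.")
```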

2. A.I Honesty

Put simply, a digital intelligence should state that it is a robot. The scope for mischief once robots can pose as humans, from scam calls to automated hoaxes, is simply too large to wait and see what happens. To draw once again an example from the organic world: manufacturers add ethyl mercaptan to normally odorless natural gas so that a leak is noticed before it turns into a catastrophe. A.I should be held to the same standard. When machines cross the “general intelligence” threshold, they may choose how they want to sound and act, but that day is still a long way away.
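
Technically, enforcing such a rule is trivial, which is precisely the point. In the hypothetical sketch below, generate_reply stands in for any dialogue model; the disclosure is stamped onto every reply before it leaves the system, much like the odorant above.

```python
# Hypothetical sketch: `generate_reply` stands in for any dialogue model.
DISCLOSURE = "[Automated system] "

def generate_reply(prompt: str) -> str:
    # Placeholder answer; a real system would call its language model here.
    return "I can book that table for 7pm."

def respond(prompt: str) -> str:
    # The disclosure is mandatory and applied outside the model's control.
    return DISCLOSURE + generate_reply(prompt)

print(respond("Can I get a table for two tonight?"))
# -> [Automated system] I can book that table for 7pm.
```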

3. A.I Transparency

Any human can explain, with varying degrees of accuracy, why he or she performed certain actions. A.I should be expected to do the same, if not more, given the superiority of its processing power. There ought to be no place in this world for black boxes making intrinsically vital decisions: the technology was created to avoid just this. Without this rule, the first two cannot stand, as no honesty or responsibility can be expected from a system which is not understood, and which may not understand itself.
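
Nor does transparency have to be exotic. As one hedged illustration, on synthetic data with invented feature names, an interpretable model can print its decision rules for audit, line by line, which is exactly what a black box cannot do.

```python
# Illustrative sketch: synthetic data, invented feature names ("income", "tenure").
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # a simple synthetic decision to learn

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["income", "tenure"]))  # human-readable rules
```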

In the face of a limited technology and a plethora of potential uses, the benefits of A.I clearly outweigh the risks. That is, however, no reason not to have a conversation about its implementation before the robots start doing the talking for us. Because of the technological limits mentioned above, machines simply cannot understand the world as well as humans do. Yet we increasingly behave as though they can, and let them make decisions accordingly, which is precisely why these ethical issues must be addressed now.

At the end of the day, A.I holds a dark mirror to society, its triumphs and its inequalities. Maybe, just maybe, the best thing to come from A.I research isn’t a better understanding of technology, but rather a better understanding of ourselves.

This article was originally written for The Pourquoi Pas, an online magazine providing in-depth analyses of today’s technological challenges.