Top 10 Principles of A.I Prediction

by Adrien Book, June 1st, 2018

Too Long; Didn't Read

A.I-induced automation will come in three distinct waves, and we’re only riding the first one. We overestimate the effect of Artificial Intelligence in the short run, and underestimate it in the long run. We cannot rely on the likes of Moore’s law to see inside the crystal ball, and most wild predictions for 2060 will seem quaint and antiquated by 2040 anyway. No one knows what the future holds, so I say go big or go home.

How to rise above the A.I noise

When trying to predict the future of A.I, a few rules must be followed, lest one be considered “frothy around the mouth”, as I was once described by a technologically-challenged executive in his 60s (compliment taken). Below are a few principles to consider when trying to predict the future of A.I in any way, shape, or form.

1 — We overestimate the effect of Artificial Intelligence in the short run

The first part of Amara’s law (echoed by Bill Gates) is the most relevant in the digital age, as we’re wont to abandon ourselves to flashy headlines and clickbait, especially when it comes to A.I-induced automation.

Indeed, there has been a slew of research projects making a wide variety of predictions about automation-caused job losses, but those predictions differ by tens of millions of jobs, even when comparing similar time frames. This is irresponsible, as legislators might use any one of these predictions as the basis for new laws, and they ought to be working from accurate figures. In fact, most workers should not be in full panic territory just yet: automation will come in three distinct waves, and we’re only riding the first one. Data analysis and theoretically simple digital tasks are already becoming obsolete thanks to the creation of “basic” A.Is trained through machine learning, but this is unlikely to go much further in the next couple of years.

When writing about A.I, don’t get carried away by hyperbole, lest you be categorized as yet another fanatic by those who know better (and whose admiration and respect one should strive for, obviously).

2 — We underestimate the effect of Artificial Intelligence in the long run

This leads us to the other side of the coin. Sci-fi enthusiasts regularly fail to fully embrace the future’s uncertainty in their analyses of the next 30 to 50 years. There are three reasons for this: too much unpredictability, not enough imagination, and an après moi, le déluge approach to predictions.

People in the 1950s thought that everything that could be invented had already been invented, and we somehow continue to see this with A.I. Yes, machine learning can only go so far: in fact, A.I breakthroughs have become sparse, and seem to require ever-larger amounts of capital, data and computing power. The latest progress in A.I has been less science than engineering, even tinkering. Yet it’s not beyond humanity to jump-start A.I by rebuilding its models beyond “backpropagation” and “deep learning”.

The way technology has always worked is as follows: gradually, then suddenly (shout-out to my boy Hemingway). No one knows what the future holds, so I say go big or go home. That’s the only way an amateur tech predictor might get it right. Most wild predictions for 2060 will seem quaint and antiquated by 2040 anyway.

3 — Moore’s law does not necessarily apply to A.I

As much as we may like the points made above, the road there may be bumpy, and not in any way predictable.

We cannot rely on the likes of Moore’s law to see inside the crystal ball. As mentioned above, most, if not all, modern uses of A.I are products of machine learning, which is far from the A.Is envisioned in most popular science-fiction movies. Machine learning, in fact, is a rather dull affair. The technology has been around since the 1990s, and the academic premises for it since the 1970s. What’s new, however, is the advancement and combination of big data, storage power and computing power. As such, any idea of explosive and exponential technological improvements is unfounded.

We might be stuck for a few years before some new exciting technology comes along. As such, don’t put all your predictive eggs in the same A.I basket.

4 — Using the right vocabulary when discussing A.I is important

It often seems like Clarke’s third law very much applies to the way we discuss A.I: any sufficiently advanced A.I is indistinguishable from magic. But that is far from the truth. This failure of language is likely to become an issue in the future.

As mentioned in past articles, the Artificial Intelligence vocabulary has always been a phantasmagorical entanglement of messianic dreams and apocalyptic visions, repurposing words such as “transcendence”, “mission”, “evangelists” and “prophets”. Elon Musk himself went as far as to say in 2014 that “with artificial intelligence we are summoning the demon”. These hyperboles may be no more than men and women at a loss for words, seeking refuge in a familiar metaphysical lexicon, as Einstein and Hawking once did.

Though an advocacy project disguised as a doctrine could yet be of use, I fear we may get the wrong ideas about A.I because the language we use is antiquated and not adapted to reality. As soon as magic is involved, any consequence one desires, or fears, can easily be derived. In other words, maybe we need to think a little less about the “intelligence” part of A.I and ponder a bit more on the “artificial” part.

When predicting A.I, use the right vocabulary, lest the nut-cases crown you their new cult leader.

5 — A.I is nowhere near as powerful as human intelligence

And won’t be for a very, very long time.

Open-ended conversation on a wide array of topics, for example, is nowhere in sight. Google, supposedly the market leader in A.I capabilities (more researchers, more data and more computing power), can only produce an A.I able to make restaurant or hairdresser appointments following a very specific script. Similar conclusions have recently been reached with regard to self-driving cars, which all too regularly need human input.

A human can comprehend what person A believes person B thinks about person C. For a machine, this is decades away, if not more. For a human, it is mere gossiping. Humanity is better because of its flaws: inferring, lying and hiding one’s true intentions cannot be learned from data.

When predicting A.I in the short and medium term, don’t liken it to human intelligence. It looks foolish.

6 — A.Is are not built in a vacuum

As creators, it is our duty to control robots’ impacts, however underwhelming they may turn out to be. This can primarily be achieved by recognising the need for appropriate, ethical, and responsible frameworks, as well as philosophical boundaries. Specifically, governments need to step up, as corporations are unlikely to forgo profit for the sake of societal good.

Writers who predict the future state of A.I must stop speaking about potential outcomes in a way that makes them seem inevitable. This clouds the judgement of people who could, and should, have a voice in how their data is used, the rules that are made with regard to robotics, and the ethics of any sufficiently advanced A.I.

Speak up. Demand proper regulations. Vote. All this will have an effect on A.I, one way or another. Don’t let Silicon Valley say that their inventions are “value neutral”. They built them, and they can (and should) fix them if needed.

7 — Don’t be like Hollywood

The world is unlikely to see its own HAL / SHODAN / Ultron / SkyNet / GLaDOS bring about the Apocalypse anytime soon. Yet movies make it seem like this may happen in our lifetime.

Ex Machina, I, Robot, Ghost in the Shell, Chappie, Her, Wall-E, A.I, Space Odyssey, Blade Runner… they all show that Hollywood mistakes Intelligence for Sentience and Sentience for Sapience. An A.I cannot ignore its programming. It’s simply impossible. “Ghosts in the machine” ARE possible, but only in the form of unexpected shortcuts, such as when we saw an A.I cheat by exploiting a bug in an Atari game. This was unexpected but very much within the machine’s programming, highlighting the need for a better understanding of algorithms.
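
To make that last point concrete, here is a minimal sketch, in Python, of how such an “unexpected shortcut” stays entirely within a machine’s programming. The toy game, the buggy scoring rule and the action names are all hypothetical, invented purely for illustration; the point is that the “cheat” is nothing more than optimisation of a flawed rule.

```python
# Hypothetical toy example: an "A.I" that exploits a scoring bug.
# Nothing here is a real game or library; it only illustrates that the
# surprising behaviour is fully determined by the code we wrote.

def buggy_score(action: str) -> int:
    """Intended rule: 'play_well' should be the best move (10 points).
    The bug: 'pause_game' accidentally returns a huge score."""
    rewards = {"play_well": 10, "play_badly": 0, "pause_game": 1000}  # <- the bug
    return rewards[action]

def best_action(actions, score):
    """A very dumb 'agent': exhaustively pick whatever maximises the score."""
    return max(actions, key=score)

if __name__ == "__main__":
    actions = ["play_well", "play_badly", "pause_game"]
    # Prints 'pause_game': surprising to the designer, yet there is no ghost
    # and no intent, only optimisation of the rule exactly as written.
    print(best_action(actions, buggy_score))
```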

Hollywood also ignores the difference between software and hardware. Yes, we have an A.I that can beat a human at chess, but the human can go home after the game and make tea, build IKEA furniture, then play some football. Have you seen robots move? Do you know how much those crappy robots cost? Millions!

To combat the cycle of fear induced by Hollywood’s versions of A.I, we need to understand what artificial intelligence is, and isn’t. A.I is very unlikely to ever become a monster. Hollywood already is one. Don’t fall for its tricks.

8 — We don’t want humanoid A.Is

Not only can we not make our own SkyNet, but we may never want to.

While passing the Turing test definitely poses an interesting challenge for machines (and for their engineers), it’s not actually the goal of A.I. as we are currently building it. Artificial Intelligence research seeks to create programs that can perceive an environment and successfully achieve a particular goal — and there are plenty of situations where that goal is something other than passing for a human.
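
As a minimal sketch of that “perceive an environment, achieve a goal” framing, consider the toy Python example below. The thermostat, its readings and its thresholds are all hypothetical, chosen only because the goal has nothing to do with passing for a human.

```python
# Hypothetical thermostat agent: perceive the environment, then act towards a
# goal (keep the room near 20 degrees C). Passing for a human is not the target.

def perceive(sensor_readings):
    """Perception step: reduce raw readings to one temperature estimate."""
    return sum(sensor_readings) / len(sensor_readings)

def act(temperature, target=20.0):
    """Action step: pick whichever action moves the environment towards the goal."""
    if temperature < target - 0.5:
        return "heat"
    if temperature > target + 0.5:
        return "cool"
    return "idle"

if __name__ == "__main__":
    readings = [18.2, 18.4, 18.1]   # hypothetical sensor data
    print(act(perceive(readings)))  # -> "heat"
```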

In fact, passing for a human can only have a nefarious outcome, which is why one should be wary of any company claiming to be able to do so. It’s much more profitable to build something capable of assisting humans rather than imitating them.

What could possibly be the use of creating a machine able to pass for a human if the company that built it cannot find an ethical way to provide a decent ROI?

9 — Most CEOs are just as confused as you are

Below are a few quotes that perfectly exemplify how profoundly lost many CEOs are when it comes to massive change within their industries:

When Alexander Graham Bell offered the rights to the telephone for $100,000 to William Orton, president of Western Union, Orton replied

What use could this company make of an electrical toy?

Years later, in 1943, Thomas Watson, president of IBM, quipped

I think there is a world market for maybe five computers.

When the market did expand, Ken Olsen, founder of Digital Equipment Corporation, was quick to follow in Watson’s footsteps by saying in 1977 that

There is no reason anyone would want a computer in their home.

And finally, my all-time favourite: Blockbuster CEO Jim Keyes, when asked about streaming in 2008, proclaimed loud and clear that

Neither RedBox nor Netflix are even on the radar screen in terms of competition.

A.I won’t change just one industry; it will change ALL industries, sometimes massively, sometimes just a little. If, like me, you’re a strategy consultant, pay very close attention to the words used. If your interlocutor needs to ask what backpropagation is, best restart the conversation from the beginning. Very slowly.

10 — A.I cannot solve everything

Though A.I will change all industries, this in no way means it will change everything and save the world. As previously mentioned, we hugely overestimate A.I’s capabilities and tend to imbue it with qualities it just does not have.

World hunger, wars, disease, global warming… All these are still very much in our hands, and anyone saying otherwise ought to feel ashamed. We need to put in a little effort of our own before getting robots to solve it all for us: it’d be too easy to let ourselves off the hook for all our past inefficiencies.

At the end of the day, A.I merely holds a dark mirror to society, its triumphs and its inequalities. Maybe, just maybe, the best thing to come from A.I research isn’t a better understanding of technology, but rather a better understanding of ourselves.

Join a movement

This article was originally written for The Pourquoi Pas, an online magazine providing in-depth analyses of today’s technological challenges. Click here to access it.