
Elon Musk is Worried. Should You Be?

by Grant Owens, November 11th, 2017



If you’ve been scanning technology news in the last few months, you’ve probably noticed that Elon Musk is sounding alarm bells (http://www.npr.org/2017/07/17/537686649/elon-musk-warns-governors-artificial-intelligence-poses-existential-ris) about the emergence of artificial intelligence — calling it “the scariest problem” and the greatest threat to civilization he sees.

And let’s be clear, Elon Musk sees things you and I don’t — not necessarily because he is more visionary, but because in his role he simply has more access to emerging technology and technologists. People take meetings with him when given the chance, and I suspect they bring their most progressive ideas to the table. So, what exactly is it that has him so worried?

This is Old Hat

A couple of years ago, as our industry began wading into the world of AI (for the purposes of brand interactions), I didn’t think much about the potential side effects of our progress. We’ve been dealing with pieces of AI and building near-AI algorithms for years. To date, the threat of that technology has been limited and at times simply entertaining, certainly not an existential teeter-totter, as Musk views it.

At the time, I was able to sideline the conspiracy theory threats by suggesting that as long as we own the power supply, we’ll be fine. In other words, if things get out of hand we can just pull the plug — literally. What danger is an angry sentient robot with no current running through it?

But recently, in addition to hearing Musk’s warnings, I’ve seen the rapid evolution of connected devices firsthand, and I can say with confidence that we’d never be able to cut the power early enough or broadly enough. The same distributed networks we’ve set up for backup assurance could just as easily provide resilience for an AI.

What’s Existential About the Threat?

In a recent interview at the National Governors Association meeting, Musk described a fairly simple progression: an AI system whose incentive is to maximize the value of a certain portfolio of stocks (in this case, stocks bolstered by the defense and military industry). One way the AI could maximize that portfolio would be to start a war — essentially by counterfeiting communications between two rival countries.

His anecdote reminded me of the saying, “guns don’t kill people, people do”, and the common rebuttal, “but the gun sure had something to do with it” — just replace the word “gun” with “AI” and you’ve arrived at the same concern as Musk.

In this case, what worries Musk are the latest developments in planning and reasoning that AI is now capable of executing. It wouldn’t take much for a human to insert an incentive into an AI system and unwittingly fail to predict its logical conclusion — the endgame.

Right in Front of Us

Armageddon scenarios aside, the creative and marketing industry also has a very big responsibility for shaping AI in a way that supports the greater good, not just one cohort. We have to ask ourselves — how can we remove bias from AI and algorithms? How can we build AI tools that are incentivized to help all people?

Right now, millions of Amazon Echo devices sit in the homes of people who are likely more affluent than the average American, and each day the Alexa AI platform becomes smarter based on the usage within those homes. Alexa is likely learning much more about what affluent customers want and how they speak than what low-income households want and how they interact. That is a deeper digital divide that may soon be very hard to bridge.

As an industry, we can enter into these efforts with empathy and conscientiousness. If we can do that, I’m hoping that the 50 U.S. governors who sat through Musk’s warnings can start working on the Armageddon scenario in parallel.

Only You Can Prevent AI Forest Fires

So, to answer the question…should we be worried? Yes, but we shouldn’t be paralyzed by the risk. We should begin to use our areas of influence to discuss the incentives, the failsafes, and the role of industry self-regulation. In this particular territory, I think we are underestimating the power of our marketing industry to influence the outcome. Many of today’s top technology resources are aimed squarely at marketing efforts, and we may see early side effects and artifacts of these efforts that others cannot.

[A version of this article originally appeared in CampaignLive on 9/7/17]