Why both Musk and Zuck are right about AI

By Abhishek Anand (@abyshake)

There are two polarizing viewpoints when it comes to AI, and IMHO, both are right.

When it comes to artificial intelligence, we have been witnessing two distinct camps with quite contradictory views about what AI could mean for human civilization. On one hand, we have people like Mark Zuckerberg, who swear by the endless possibilities AI holds for the good of mankind and who have been investing heavily in accelerating progress in machine learning, deep learning, and artificial intelligence. On the other hand, we have people like Elon Musk, who have been quite critical of AI and have been both very vocal and very public in expressing their concerns.

What makes this important is how public the debate has become over the past few months.

“I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don’t understand it. It’s really negative and in some ways I actually think it is pretty irresponsible.”

That’s Zuck. And then we have Musk.

“AI is a fundamental risk to the existence of human civilization, in a way that car accidents, airplane crashes, faulty drugs, or bad food were not.”

So we have two really smart people talking about the same subject in quite different ways. Who is right, and who isn't?


One thing we must get right first: the debate is not about AI and deep learning in general, but about one very specific application of it, autonomous artificial intelligence.

What's autonomous AI? Plainly speaking, it is an AI system capable of not just analyzing data but also of taking actions based on its analysis of that data, actions aimed at delivering a specific set of results.
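The analyze-then-act distinction can be made concrete with a tiny sketch. This is not any real system; the metrics dict, the pricing rule, and the 10% cap are all invented for illustration:

```python
# A minimal sketch of the loop that makes a system "autonomous": it does not
# just analyze the data, it also decides on an action by itself. Every name
# here (the metrics fields, cut_price, the 10% cap) is hypothetical.

def autonomous_step(metrics):
    """Analyze performance metrics and return an action, with no human review."""
    gap = metrics["target_rate"] - metrics["conversion_rate"]
    if gap <= 0:
        return None  # performing at or above target: do nothing
    # Act: cut the price in proportion to the shortfall, capped at 10%.
    return {"action": "cut_price",
            "product_id": metrics["product_id"],
            "pct": min(round(gap * 5, 3), 0.10)}
```

The point is the return value: a purely analytical system would stop at computing `gap` and show it to a human, while an autonomous one goes on to change the store by itself.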

Think of an e-commerce platform like Shopify. Say there existed an extension or plugin for Shopify that let you put your store — say a dropshipping store — on autopilot. You simply set things up the first time, and that's it; you are done! The system automatically determines which products you should have in your catalogue, which promotions to run on the main website, what kind of assortment should be prominently displayed to which sort of consumers, and so on. And all of this happens without your intervention — or in other words, behind your back.

Would you love using this system? Of course you would — as long as it is making money for the business. But the first time you see it driving losses up, you would cry foul!

In a nutshell, that is the essence of the problem. And I must note here that I have oversimplified things and picked perhaps the least malignant example I could think of. But we will get to that later.


I think it is quite possible that the media has been portraying this in a grimmer fashion than Musk really feels about it. If you read the Fortune article that quoted Musk as calling AI the greatest risk we face as a civilization, you will find that Musk is not actually advocating an end to AI research or advancement; he is emphasizing the need for regulations around this inevitable future in which AI will play a far larger role than any tech advancement we have seen in the past several decades.

And he is not wrong. As AI penetrates more and more segments of the civilized, industrialized economy, the risks associated with misuse and errors — human or otherwise — will also increase. Who would you hold accountable then? Would you name a superintelligent computer program as the defendant in a lawsuit?


Plain and simple evidence — that’s it.

Look at OpenAI, a non-profit initiative by Elon Musk and Sam Altman (President, Y Combinator).

“Artificial general intelligence (AGI) will be the most significant technology ever created by humans.” — reads their website.


Wouldn’t you be? After all, what’s there not to like?

Imagine a scenario where, as an advertiser on Facebook, you no longer have to worry about audience profiling and targeting. A few basic questions (and maybe not even that), and you are done. The ad platform takes care of everything else: what kind of ads to display; whether it should be a video ad, a graphic, or a plain text post; which posts to highlight for maximum impact and reach. No more choosing which posts to boost. The only thing you, as an advertiser, need to be concerned with is your weekly or monthly ad-spend limit. Facebook handles the rest.

Why would Facebook want to do that? After all, doesn't it just want to increase the revenue it can get from you? Why should it be concerned about maximizing your reach or impact? It's simple, really: if it can help you deliver the best campaign performance, it increases its chances of being your preferred way of reaching your target audience. Less effort and better performance: which advertiser wouldn't love that?
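One plausible mechanism behind "the ad platform takes care of everything else" is a bandit-style explore/exploit rule for picking ad formats. This is only a sketch; the formats, the stats layout, and the epsilon value are all made up for illustration:

```python
import random

def choose_format(stats, epsilon=0.1, rng=random):
    """Pick an ad format: usually the best performer so far, sometimes explore."""
    if rng.random() < epsilon:
        return rng.choice(list(stats))  # explore: try a random format
    # Exploit: highest observed click-through rate (clicks / impressions).
    return max(stats, key=lambda f: stats[f]["clicks"] / max(stats[f]["shown"], 1))

def record(stats, fmt, clicked):
    """Update the running counts after an impression."""
    stats[fmt]["shown"] += 1
    stats[fmt]["clicks"] += int(clicked)

# Hypothetical running totals per format.
stats = {"video": {"shown": 100, "clicks": 8},
         "image": {"shown": 100, "clicks": 5},
         "text":  {"shown": 100, "clicks": 2}}
```

The advertiser never picks a format; the system keeps serving whatever earns the most clicks while occasionally testing the alternatives.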

Once again, that is just one of the countless real-life applications an intelligent deep-learning system could have within Facebook's existing infrastructure alone. Expand beyond that, and the possibilities are endless.


Both of them actually.

I think both Musk and Zuck agree on the mind-numbing impact AI and deep learning are going to have in the coming decades, and they are both working towards being part of that future. In many ways, they are inventing parts of that future.

But Musk is also right in expressing his concerns over AI. And this is where I would re-quote a line from the movie Green Lantern: First Flight. (I think I have used this quote once before somewhere; I am not sure where.)

The weapon is a mighty force. The most powerful and absolute in your universe. With one exception. One…slight…imperfection. The imperfection…every weapon has. Its user.

And such is the case with AI.

We have already taken Facebook's ad targeting and a hypothetical Shopify revenue maximizer as examples. Let us look at some other examples now.


Russia. Fake news articles. Propaganda. US Presidential Elections 2016. Facebook, Twitter Ads.

You see where I am going with this, don’t you?

The human mind is susceptible. What if there were an AI system that understood the preferences of every single citizen far better, and was capable of tailoring their entire news feed with the singular intent of changing their opinion on different issues, or changing their political affiliation altogether? There already exist all kinds of websites with polarizing affiliations and views on specific topics. There are websites that pass judgement on matters without a shred of evidence. And now you would have in front of you definitive statements confirming those small, almost non-existent doubts, maybe even confirming your worst fears. Are you sure that wouldn't change your opinion on things that matter?

But let's give the system some moral fibre. Teach it to act in the best interest of the human race. What are the paradigms within which it is supposed to operate? What if the system — based on the parameters and rules you taught it to make decisions on — decides that it would be in the best interests of your country to no longer be a democratic institution, but rather a monarchy or a communist regime? What happens then?

The AI may still be acting towards the greater good, but did it consider the cost you would have to pay to arrive at that greater good? Had it been given a comprehensive enough set of parameters, rules, regulations, and scenarios to even be a good judge of what the greater good should look like? And even if it had all that information, what happened to FREE WILL?

Food for thought!


Now let us consider an automated trading platform: a platform that trades based on a set of rules. If your platform is able to identify market trends that can help you gain 20 cents on the dollar, great for you. But what if your trading platform also has access to a medium that enables it to plant fake, wrong, or dubious market insights in an effort to manipulate market sentiment and maximize your gain? Does that make it right? What about the hundreds of thousands of people at whose expense you made that money? What justifies it?
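The legitimate half of such a system, a trader acting on explicit rules, could look something like this minimal sketch (the moving-average crossover rule and the window sizes are made up for illustration, not any real platform's strategy):

```python
# A toy rule-based trader: act on a simple moving-average crossover signal.

def moving_average(prices, window):
    """Average of the last `window` prices."""
    return sum(prices[-window:]) / window

def trade_signal(prices, short=3, long=5):
    """Return 'buy', 'sell', or 'hold' based on a moving-average crossover."""
    if len(prices) < long:
        return "hold"  # not enough history to apply the rule
    short_ma = moving_average(prices, short)
    long_ma = moving_average(prices, long)
    if short_ma > long_ma:
        return "buy"   # recent prices trending above the longer baseline
    if short_ma < long_ma:
        return "sell"
    return "hold"
```

The rules here are transparent and auditable. The danger the paragraph above describes begins when the same system is also given the means to move the prices it trades on.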

And what if the AI has the power to bring about instability to a region/country that can essentially affect the way the market behaves so as to make it move in a favorable direction for you? Troubling thoughts, aren’t they?


If you are unable to comprehend the real danger, think of the movie Avengers: Age of Ultron.

Ultron was an AI, wasn't it? One that was designed to bring peace to the world, and due to some fucked up logic, it ended up deciding that the one thing standing between the world and a peaceful existence was The Avengers, and it was willing to annihilate part of the world to remove that obstacle. That is the whole problem with autonomous artificial intelligence, and that is why Musk is pushing so hard for regulatory controls governing AI.


The problems with AI can be summarised by that one phrase, and by that quote from The Weaponers in Green Lantern: First Flight. AI is, at the end of the day, a tool created by humans, and the robustness and intelligence of that tool will decide how effective it is. The more gaps you leave in the system, the more problematic it will be. And given that it would be quite difficult to come up with an algorithm that can factor in almost all the scenarios a human mind is capable of evaluating, autonomous AI is still a hard sell. It doesn't matter whether it is Zuck gunning for it or someone else; AI is not yet at a stage where we can expect it to decide things for us. And for the sake of all mankind, I hope we do not get there for a while.

That’s it for today; see you tomorrow!
