Artificial Intelligence and The End of Everything — The Skynet Debate

Written by mattward | Published 2017/10/28
Tech Story Tags: artificial-intelligence | ai | machine-learning | robotics | neural-networks

Last night I had the opportunity to attend a debate on the future of AI at ETH, here in Zurich, Switzerland. It was a fascinating discussion between Robin Hanson, author of The Age of Em, and Max Daniel of the Foundational Research Institute, but I think they, like most technologists, missed an important variable: humanity.

The biggest difference between the two guests centered on the likelihood of Artificial General Intelligence and our ability to control AI in the future — Max arguing for thoughtful control and Robin having little fear of a Skynet scenario.

This is a point that has been discussed extensively, and yet, in my opinion, it warrants further discussion.

What happens in a winner-take-all scenario? We move fast and break things. When the upside is fame, fortune and going down in the history books, who thinks about control and regulation?

I’d argue it is an afterthought.

That is the challenge I see with conventional AI research.

In the past this was acceptable. Humans create and invent and ALWAYS deal with unexpected consequences. Textile mechanization, the cotton gin among its first steps, displaced vast numbers of workers and helped spark the Industrial Revolution. It had pros and cons, and it ushered in one of the rockiest eras in modern human existence, with a dramatic upheaval of social norms and working conditions.

But net-net it was incredibly positive on the path to progress.

AI can do the same, or it can destroy us. It all depends on how it develops and how it is controlled. And society needs to have a more educated discussion about the risks and rewards, and specifically about control mechanisms.

It is very, very rare to control masses of subservient slaves who are better, smarter and stronger in every measurable way than their few aristocratic overlords. Revolution is inevitable, just a matter of time, and arguably just evolution…

With AI this could be the case as well. It depends on how quickly AGI develops and how humans control it.

But as a whole, humanity is a morally weak entity, focused more on instant gratification than on long-term success and happiness. In business you sell “painkillers”, not “vitamins”, because people only care about feeling better now, not their future health.

The examples are plentiful: unsafe sex, cramming before tests, McDonald’s… We prioritize the here and now while sacrificing the future.

And if AI is something that needs controls and planning to be safe and successful, how do we as a species ensure that?

What prevents labs and corporations around the world from working at breakneck speeds with no regard for the consequences?

AI has the potential to be a plague. Scientists create the “ultimate” drug, only for it to spread uninhibited, rapidly reproduce, mutate, evolve and quickly become uncontainable. And unlike a cure for cancer, the first group to create true Artificial General Intelligence will win exponentially more fame and fortune. Cancer is one problem. AGI that learns thousands of times faster than humans and solves the world’s toughest challenges is infinitely more valuable, and infinitely more dangerous.

With great power comes great responsibility.

I for one am very excited about the possibilities of AI. Today, applied AI/ML startups are all the rage, poised to deliver great breakthroughs in business and efficiency around the world. These are not the problem: such simple AI systems are mere math, more like party tricks and pattern matching than true intelligence.
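To make that concrete, here is a minimal sketch of what most applied ML looks like under the hood: fitting a statistical model to labeled examples. The scikit-learn toy dataset and settings below are my own illustration, not anything from the debate or from any particular startup.

```python
# A minimal sketch: "applied AI" as curve-fitting on labeled examples.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # 150 flower measurements, 3 species labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Mere math": a weighted sum of the inputs squashed into class probabilities.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # find weights that match the training patterns

print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
# High accuracy on one narrow task; no goals, no understanding, no agency.
```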

But true AI is something else entirely. And it is morally irresponsible to assume it will behave and function with goals and logic similar to our own.

There is a problem here. Whose responsibility is control and monitoring? Companies don’t care. They move fast and break things for massive profit.

And there is debate on this subject. But as we move into a world where more and more workers are displaced by AI (not the subject of this article), how can we use these people, and the resources and abundance created by today’s simple AI startups, to build an AI defence initiative?

It is hard enough to keep hackers out. How much harder is it to perfectly constrain an entity that moves and learns a million times faster than humans, with a completely different logic and brain structure? It is arguably impossible; all code has bugs and accidental backdoors…
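As a toy illustration of that last point, consider the classic Python sandbox escape: strip eval() of every builtin and the guard looks airtight, yet object introspection walks straight back to the interpreter’s full class hierarchy. The sandbox below is hypothetical and deliberately naive, but the escape route is a well-documented one.

```python
# A deliberately naive "sandbox": no builtins, so nothing dangerous... right?
SANDBOXED_GLOBALS = {"__builtins__": {}}

# The obvious route is blocked:
try:
    eval("open('/etc/passwd')", SANDBOXED_GLOBALS)
except NameError as err:
    print("Blocked:", err)

# ...but introspection on a bare tuple climbs back to `object` and from
# there to every class loaded in the interpreter, builtins stripped or not.
classes = eval("().__class__.__base__.__subclasses__()", SANDBOXED_GLOBALS)
print(f"Escaped: reached {len(classes)} classes from inside the 'sandbox'")
```

If a hand-written guard a few lines long ships with an accidental backdoor like this, perfectly constraining a system that probes millions of paths per second seems optimistic at best.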

One small step for man, one giant leap for AI.

Some mistakes you cannot undo. There is no Ctrl+Z.

Thoughts? What would you propose?

Link to original article on thesyndicate.vc.


Written by mattward | Investor, Startup Advisor, Entrepreneur, Author, Futurist disruptors.fm thesyndicate.vc
Published by HackerNoon on 2017/10/28