There are a lot of stories about AI taking over the world.
There are a lot of real experts concerned about AI taking over the world.
AI Open Letter - Future of Life Institute (futureoflife.org)
The difficulty with AI, the argument goes, is that we only have one chance to get it right. If we build an AI that is way smarter and way faster than us, we are at its mercy. If it starts acting in a way that isn’t in our best interest, it may be hard (perhaps impossible) for us to stop it. That is why experts have been calling for caution and regulation in AI development: if we screw it up the first time, that may be it. In which case, it sure would be nice if there were a reset button and we could try again…
We can — sort of. We only have one “real” world, this is true, BUT we do have the ability to create a near-infinite number of simulated worlds. And these simulated worlds are only going to get more advanced and realistic as time goes on.
What am I talking about? Consider present-day video games. World of Warcraft, The Sims, and StarCraft are a few examples. These games provide close approximations of real-world environments for AI. For example, to win a game of StarCraft you have to master resource management, war, and strategy. There are many real-world applications for an AI that can learn these skills. Top tech companies are just beginning to get their feet wet doing this. To learn more about AI playing games, I wrote about it here:
It’s All Fun & Games until AI wins them all_Have you ever played the game breakout? You know, the video game where you bounce a ball into the blocks at the top of…_hackernoon.com
Imagine a company has an advanced AI that it wants to test before unleashing it on the world. To see how it performs, the company drops it into a simulated world, where the AI “plays” through the world millions of times. The goal is to see how often the AI produces positive versus negative outcomes. The company then uses that data to improve the AI so that only the positive outcomes occur. Once a certain safety margin is reached, the AI is ready to be set free in our world.
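What might that evaluation harness look like? Here is a hedged sketch in plain Python. The names simulate_episode and candidate_ai are hypothetical placeholders (a real version would wrap a full game or world engine and the AI under test), and the 99.9% safety margin is an arbitrary number chosen purely for illustration.

```python
# Hypothetical sketch of "test it in a simulated world first".
import random

def candidate_ai(state):
    # Stand-in policy: a real AI would choose actions from the world state.
    return random.choice(["explore", "build", "wait"])

def simulate_episode(policy, seed):
    # Stand-in world: returns True for a "positive outcome", False otherwise.
    rng = random.Random(seed)
    score = 0
    for step in range(100):
        action = policy({"step": step})
        score += 1 if action != "wait" or rng.random() < 0.5 else 0
    return score >= 60  # arbitrary bar for a "good" run

def safety_report(policy, episodes=10_000, required_margin=0.999):
    # Run many playthroughs and check the positive-outcome rate
    # against the required safety margin.
    positives = sum(simulate_episode(policy, seed) for seed in range(episodes))
    rate = positives / episodes
    return rate, rate >= required_margin

rate, safe_enough = safety_report(candidate_ai)
print(f"Positive-outcome rate: {rate:.4f} -> ready for release: {safe_enough}")
```

The hard engineering problem, of course, is making the simulated world faithful enough that a high positive-outcome rate in simulation actually predicts safe behavior out in ours.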
This has strong appeal on many fronts. Governments would embrace such an approach because it would allow regulatory checks on a technology that is otherwise notoriously difficult to regulate. Tech companies would enjoy the ability to test different aspects of their AI quickly and with minimal upfront cost. Finally, consumers would love this, because — well — they won’t die in a robot takeover! Everyone wins!
Frankly, this seems like a logical way to provide safer regulatory oversight while letting businesses keep innovating in AI quickly enough to stay competitive. Are there a ton of specifics that would need to be ironed out? Yes, absolutely, but the central idea is sound.
Video games have been increasing in quality, from Pong (1972) to the massive open-world games of 2013 and beyond, with many degrees of freedom. Then there are MMOs, which support thousands of players online in one shared world simultaneously. And in 2017, games can have stunningly realistic graphics thanks to advances from the likes of Unreal Engine.
We went from Pong to that in 45 years. So if we assume any rate of progress at all in game design and graphics, where will we be in 1,000 years? Needless to say, we will eventually achieve virtual worlds that are indistinguishable from reality, video games or otherwise. But don’t take my word for it: these thoughts come from no less than Elon Musk.
Imagine a city planning game. Or better yet, a city planning simulation built for the purpose of being an AI playground. We could test the best way to incorporate self-driving cars, hyperloops, city housing, public transit, and park systems. The sheer scale of this would be immense. Of course, loading in good sample data would be crucial. We would need sensors that accurately track where and how people get around in real cities. We could leverage existing smartphones to understand where and how people move. But if that is too invasive, there are efforts like Chicago’s Array of Things that attempt to provide a living, breathing picture of the city through data.
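As a purely illustrative example of what that sample data might boil down to, here is a tiny sketch that reduces made-up trip records to origin-destination counts by travel mode. The zone names and trips are invented; a real pipeline would ingest millions of records from phones or sensor networks like the Array of Things.

```python
# Illustrative only: aggregating mobility traces into an
# origin-destination count by travel mode.
from collections import Counter

# (origin_zone, destination_zone, mode) tuples -- hypothetical sample data.
trips = [
    ("loop", "north_side", "transit"),
    ("loop", "north_side", "car"),
    ("west_side", "loop", "bike"),
    ("north_side", "loop", "transit"),
    ("loop", "north_side", "transit"),
]

od_matrix = Counter(trips)

for (origin, dest, mode), count in od_matrix.most_common():
    print(f"{origin} -> {dest} by {mode}: {count} trips")
```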
Once collected, such a dataset could help AI learn to design our ideal cities by virtually building millions of them and seeing how simulated people interact with each one. The AI would improve its designs with each iteration. In the end, we would take all of these learnings and know we were building the most efficient city design before even picking up a brick.
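Here is a toy version of that iterate-in-simulation loop, just to show its shape. The three-knob “city design”, the scoring function, and the plain random search are all stand-ins I made up; a real system would plug in an actual agent-based city simulator and a far smarter search or learning method.

```python
# Toy sketch: propose a city design, simulate how people fare in it,
# keep the best design, repeat many times.
import random

def random_city_design(rng):
    # A "design" reduced to a few knobs: transit, parks, and road shares.
    transit = rng.random()
    parks = rng.random() * (1 - transit)
    return {"transit": transit, "parks": parks, "roads": 1 - transit - parks}

def simulate_citizens(design, rng):
    # Stand-in for a full agent-based simulation: score mobility and livability.
    mobility = design["transit"] * 0.6 + design["roads"] * 0.4
    livability = design["parks"]
    return 0.7 * mobility + 0.3 * livability + rng.gauss(0, 0.02)

def search_designs(iterations=100_000, seed=0):
    rng = random.Random(seed)
    best_design, best_score = None, float("-inf")
    for _ in range(iterations):
        design = random_city_design(rng)
        score = simulate_citizens(design, rng)
        if score > best_score:
            best_design, best_score = design, score
    return best_design, best_score

design, score = search_designs()
print(f"Best simulated design (score {score:.3f}): {design}")
```

The point is the loop rather than the numbers: propose a design, simulate people living in it, keep what scores well, and repeat millions of times.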
Are there alternative ways to safely test AI? Would this even work? Could a multi-million dollar industry form around testing AIs in virtual worlds? Please discuss in the comments!