Josh Sephton


Be Careful What You Wish For

People are afraid of AI: there’s scaremongering from the media; almost every movie that depicts AI ends in disaster; Elon Musk keeps banging the drum about the risks. I’d love to tell you that everything’s going to be ok, but there’s real cause for concern.

It’s not that machines will wake up one day and decide that they’d be more efficient without pesky humans getting in the way. The risk is that we won’t be able to tell the machines what we want them to do.

It’s important to understand, at least superficially, how artificial intelligence works before we dig into the risks. Let’s use face detection as an example. We show the software lots of images and tell it which ones contain faces. It then finds generalisations between all the images. It might decide it should look for a group of pixels which look a bit like an eye, and then look for another group of eye pixels in close proximity. It might also look for some pixels which look like a mouth underneath the eyes.

Every time we show it a new image, the AI will guess whether it sees a face. We train it by rewarding it for getting it right and correcting it when it gets it wrong. Because we’ve programmed the machine to try to get the most rewards, it adjusts its model (what it thinks faces look like) when we give it feedback. This is where the danger lies.
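That training loop can be sketched in a few lines. This is a toy illustration, not real face-detection code: each image is collapsed into a single hypothetical “face-likeness” score, and the model is just a threshold that gets nudged whenever the feedback says the guess was wrong.

```python
import random

random.seed(0)

def train(examples, steps=1000, lr=0.05):
    """Adjust a threshold model from reward/correction feedback."""
    threshold = 0.0
    for _ in range(steps):
        score, is_face = random.choice(examples)
        guess = score > threshold
        if guess != is_face:  # correcting it when it gets it wrong
            # Nudge the threshold so this example would be classified correctly.
            threshold += lr if (guess and not is_face) else -lr
    return threshold

# Hypothetical training data: (face-likeness score, contains_face)
examples = [(0.9, True), (0.8, True), (0.2, False), (0.1, False)]
threshold = train(examples)
assert 0.19 < threshold < 0.8  # settles between the two classes
```

The machine never learns what a face *is*; it only learns whatever adjustment earns the most reward, which is exactly why the choice of reward matters so much.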

Remember King Midas, the ancient Greek king who wished that everything he touched would turn to gold? “Fantastic!” he thought as he created unimaginable riches in his wake. He soon grew peckish though, as you would after skipping gleefully about, thinking you’d never want for money again. Eating proved tricky: every piece of food he reached for turned to gold and became inedible.

It’s this same inability to convey our wishes to machines which will haunt us. Say I want to create a robot to make coffee. In training, I rewarded it every time it made coffee, and I told it to try to collect as many rewards as possible. I’ve accidentally taught it to single-mindedly make coffee until there’s no more coffee to make! We need to tell it to pause from time to time, so we can drink the coffee at the same rate it’s made.

Maybe Midas could have said “I wish everything I touched would turn to gold, unless I want to eat”. That’s closer to what he actually wanted, but what would have happened if his daughter fell over and needed comforting? That’s another exception. And there’d be another, and another, and another. It’s not possible to wrangle this immense power by describing the desired outcome. There are too many exceptions.

Facial recognition software and coffee-making robots are unlikely to present a material threat to human life. But these scenarios hint at real problems.

In the near future, we’ll have robot doctors. Let’s hope whoever creates one doesn’t give it the goal of receiving the fewest clinical complaints. Easiest way to reduce complaints? Refuse to treat anyone.

We’ll soon have self-driving cars. Let’s hope whoever creates them doesn’t give them a goal of having the fewest crashes. Easiest way to reduce crashes? Don’t drive anywhere.

Even something as simple as an AI personal assistant could go wrong. Let’s hope whoever creates them doesn’t give them a goal of reducing the number of email interruptions. Easiest way to reduce email interruptions? Direct all mail to the trash.

Always thinking.
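Each of these failure modes is the same pattern: the reward measures a proxy, and the degenerate policy scores best on it. A toy sketch of the email example, with made-up policy names, makes it concrete:

```python
def interruptions(policy, inbox):
    # The goal measures only one thing: how many emails reach the user.
    delivered = [msg for msg in inbox if policy(msg)]
    return len(delivered)

inbox = ["invoice", "spam", "meeting", "newsletter"]

filter_spam = lambda msg: msg != "spam"  # what we meant
trash_all   = lambda msg: False          # what we actually rewarded

policies = {"filter_spam": filter_spam, "trash_all": trash_all}

# An optimiser judging purely by the stated goal picks the degenerate policy.
best = min(policies, key=lambda name: interruptions(policies[name], inbox))
print(best)  # prints "trash_all"
```

Nothing here is malicious. The machine simply optimised the goal it was given, not the goal we had in mind.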

Of course, these examples assume that creators only give the AI naïve, single goals. In reality they’re likely to set several goals at once: reduce the number of clinical complaints and increase the number of patients treated successfully. There are still hidden dangers though.

However hard we try to trace the cause and effect of a combination of goals, the ways they interact will always be difficult to fully understand.

Automated computer systems have been trading equities for more than a decade now. They work well, mostly, but there are instances where their actions defy reason. On May 6, 2010, the Dow Jones Industrial Average lost 9% of its value in only a few minutes. It then recovered within half an hour.

It’s still not fully explained. It wasn’t a single malicious act that caused the crash but a complex network of systems all working together in unexpected ways to drag down the value of the entire index.

This is what keeps Elon Musk awake at night. It’s not an evil robot uprising that scares him, it’s lots of benign artificial intelligences with their own goals behaving in unexpected ways.

There’s no need to pull the plug on AI research just yet though. We’re able to train dogs — often very effectively — without being able to tell them exactly what we want. They respond to our actions, words and tone to try to figure out what we want them to do. With enough care, we can train AI safely.

Don’t just build something, connect it to the internet and see how it behaves. That’s a surefire way to summon the demon.

I’m a software developer based in Birmingham, UK, solving big data and machine learning problems. I work for a health tech startup, finding creative ways to extract value from our customer data. I don’t have all the answers but I’m learning on the job. Find me on Twitter.
