Ronald Ashri

@ronald_istos

How to settle any “is this real AI?” debate — Measuring Artificial Intelligence

March 25th 2017

With the resurgence of interest in AI, a favorite topic of debate is exactly what counts as “real” AI. On the one hand, you have AI people complaining that the term has lost its meaning. On the other hand, you have reporters, startups, S&P 500 boards, and every VC firm on the planet all claiming that anything slightly complex or slightly automated is AI.

This leads to exasperated outbursts on Twitter from AI researchers. Natural reactions, when you consider what the press comes up with: stock photos of menacing robots that have nothing to do with the story (robots attacking cars. Run!), or breathless claims that AI did something when it only kind of, sort of did. But why worry about nuance.

I get it. It’s annoying. But, ultimately, does it matter? The good news is that it doesn’t really matter much. People will come up with stuff, people will build stuff. Some of it will work, some of it won’t. It’s all good.

Nevertheless, a bit of mental stretching and some harmless debating over definitions is always fun.

So without further ado, here is my very simple guide to measuring AI. To be clear, this is not your run-of-the-mill, sweep-my-floors AI. Oh, no. This is your run-for-the-hills, we-are-all-going-to-die AI.

Because that is a thing.

Axiom 1: It’s not how it does it but what it does

First, let us accept that AI cannot be measured by the techniques used to build a tool. Sorry, ML and neural-net people. I don’t care how complex a neural net you have, whether you are using sophisticated symbolic reasoning, or whether you are simulating a beehive. Neural nets rule the roost these days, but that may change with an exciting new discovery. Will we stop calling it AI then?

It is not about the technology.

It is the characteristics that the final tool exhibits that are relevant.

So let’s dive into those.

1. How proactive is it?

The first characteristic is how proactive and deliberative the supposedly AI-powered tool you are using is.

A reactive tool will only respond to external stimuli in the environment. Think “smart” light switch. “Natural light has reached 30% intensity — increase artificial light by 20%”. Meh. Not going to be running countries anytime soon.

Proactivity with deliberation is quite different. Such a tool takes the initiative. Think “intelligent” diet coach bot. Unless I get up and run in the morning it will be calling my phone and barking orders. It will be switching on the lights to get me out of bed. It wants to turn me from a chubby slob to a muscular, finely-tuned human machine. It will proactively employ a number of strategies to make that happen.

A proactive tool can cause things to happen in the absence of any external stimuli.

That’s when you need to start taking a second look.
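The reactive/proactive distinction above can be sketched in a few lines of code. This is a toy illustration, not a real product: the light switch and diet coach classes, their method names, and the thresholds are all made up for the example.

```python
class ReactiveLightSwitch:
    """Reactive: acts only in response to an external stimulus."""

    def on_sensor_event(self, natural_light_pct):
        if natural_light_pct <= 30:
            return "increase artificial light by 20%"
        return "do nothing"


class ProactiveDietCoach:
    """Proactive: checks its own goals on a timer and initiates
    action even when nothing in the environment has changed."""

    def __init__(self):
        self.user_ran_today = False

    def on_tick(self, hour):
        # Fires on the bot's own schedule, not on an external event.
        if hour >= 7 and not self.user_ran_today:
            return ["call phone", "switch on lights"]
        return []


switch = ReactiveLightSwitch()
coach = ProactiveDietCoach()
print(switch.on_sensor_event(30))  # only ever answers a stimulus
print(coach.on_tick(8))            # acts unprompted
```

The structural difference is where control lives: the switch waits to be called with an event, while the coach polls its own goals and decides to act.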

2. Does it learn?

I don’t mean how deep its neural nets are. I simply mean: can it evaluate whether its actions led to the right result and, if not, adjust them? If notifying me every day that I should go for a run is not having the desired effect, my diet bot might want to reconsider its approach. Any tool that simply does the same thing over and over again, without learning from its actions, is not running the risk of taking over anything.

Learning means it can tell the difference between success and failure and can adapt its plans accordingly.

Please note, I don’t care if it uses the most advanced pre-trained neural network. If the learning has stopped — that’s it. You lose. Moving along.
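A minimal version of this success/failure feedback loop might look like the sketch below. The strategy list and the three-failure threshold are invented for illustration; the point is only that the bot measures outcomes and changes its plan when the current one keeps failing.

```python
STRATEGIES = ["push notification", "phone call", "lights on at 6am"]


class LearningDietCoach:
    def __init__(self):
        self.strategy_index = 0
        self.failures = 0

    @property
    def strategy(self):
        return STRATEGIES[self.strategy_index]

    def record_outcome(self, user_went_running):
        """Tell success from failure and adapt the plan accordingly."""
        if user_went_running:
            self.failures = 0
        else:
            self.failures += 1
            # Same trick failed three days running: try something else.
            if self.failures >= 3:
                self.strategy_index = (self.strategy_index + 1) % len(STRATEGIES)
                self.failures = 0


coach = LearningDietCoach()
for _ in range(3):
    coach.record_outcome(user_went_running=False)
print(coach.strategy)  # escalated from "push notification" to "phone call"
```

A pre-trained model with frozen weights fails this test exactly as the article says: it has a policy, but no loop that evaluates results and updates it.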

3. How autonomous is it?

Autonomy is tricky to define. Being part-Greek, I need to start by explaining that it is indeed a Greek word (aren’t they all?) and it means being able to set your own laws. In the search for the all-conquering AI, it should be taken to mean that our AI can somehow reason about what it wants and create its own goals: goals that were never defined when it was compiled into production code.

Our diet coach bot is going to have to somehow figure out that since I am ignoring every notification it is sending my way it needs to try something more disruptive. It then reasons that it needs to cancel my late night Indian curry delivery and instead order a personal coach to show up at the door. Now we are talking.

Autonomous AI means delegating to machines not just decision-making in a narrow sense but the choice about what end objective it should follow.

Think of it this way. We call cars that drive themselves autonomous. Sure enough, they get to make most of the decisions along the path from A to B. However, they cannot decide that instead of taking us to point B they would prefer to take us to point C. When that starts happening, they will be truly autonomous.
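The diet coach example can be sketched as a goal-deriving function. Everything here is hypothetical (the state key, the threshold, the goal strings), and note the honest caveat in the comments: this sketch still only escalates among pre-written options, whereas autonomy in the article’s sense would mean goals not enumerated anywhere in the source.

```python
def derive_goals(state):
    """Start from the one goal the bot shipped with and derive
    sub-goals from the runtime situation."""
    goals = ["user loses weight"]  # the only goal defined at compile time
    if state["notifications_ignored"] > 10:
        # Nudging has failed, so the bot adopts more disruptive goals.
        # (A rule-based sketch: true autonomy would generate goals that
        # appear nowhere in this file.)
        goals.append("cancel late-night curry delivery")
        goals.append("order a personal coach to the door")
    return goals


print(derive_goals({"notifications_ignored": 14}))
```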

4. How creative is it?

This is an attribute that hardly ever comes up.

As humans we are able to make some crazy connections. We call them crazy because there are no obvious links from idea A to idea B, but put them together and you have rainbow ice-cream.


The field of computational creativity tries to get software to be creative as well. Creativity is arguably essential to coming up with novel solutions to problems. Our diet bot cannot just formally reason about the world to come up with ways to make us lose weight. It’s going to have to throw some stuff at the wall and see what sticks. Think laterally; think outside the code, if you will.

Creativity is the ability to generate novel ideas and evaluate their effectiveness for the task at hand.
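That definition, generate novel ideas and then evaluate them, is the classic generate-and-test loop of computational creativity. Here is a deliberately silly sketch: the concept lists are invented, and the scoring function is a crude stand-in for a real evaluation against the task (does the idea actually make the user run?).

```python
import random

THINGS = ["rainbow", "ice-cream", "running", "alarm clock", "curry"]
ACTIONS = ["combine", "replace", "schedule", "gamify"]


def generate_idea(rng):
    # Make "crazy" connections by randomly combining unrelated concepts.
    return f"{rng.choice(ACTIONS)} {rng.choice(THINGS)} with {rng.choice(THINGS)}"


def evaluate(idea):
    """Stand-in fitness function: counts distinct words as a crude
    novelty proxy. A real system would test the idea against the task."""
    return len(set(idea.split()))


rng = random.Random(0)  # seeded so the run is reproducible
candidates = [generate_idea(rng) for _ in range(20)]
best = max(candidates, key=evaluate)
print(best)
```

The two halves mirror the definition exactly: `generate_idea` supplies novelty, `evaluate` supplies the judgment of effectiveness.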

5. How connected is it?

In order for our diet bot to cause true mayhem in our life, and put its crazy ideas to action, it needs connectivity. It needs to be able to plug into different APIs and communicate with other AIs so as to cause things to happen. That’s how it gets to order a personal coach, cancel our take-out, change our grocery order and so on.

AI needs to connect to other AIs and services. It needs to be able to co-operate, co-ordinate and compete in the real world.
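In practice, “connected” means the bot acts through other services’ APIs. The endpoints, methods, and payloads below are entirely hypothetical; the sketch only shows the shape of translating a bot’s decisions into calls on external services.

```python
import json


def plan_to_api_calls(plan):
    """Translate the bot's decisions into (method, path, body) calls
    on external services. All endpoints here are made up."""
    calls = []
    if "cancel takeout" in plan:
        calls.append(("DELETE", "/takeout/orders/tonight", None))
    if "book coach" in plan:
        calls.append(("POST", "/coaches/bookings",
                      json.dumps({"when": "tomorrow 7am"})))
    return calls


for method, path, body in plan_to_api_calls(["cancel takeout", "book coach"]):
    print(method, path, body)
```

Connectivity is what turns deliberation into consequences: without these outbound calls, even a proactive, learning, autonomous, creative bot can only talk to itself.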


The minute it is connected and can communicate with other diet bot AIs, they may jointly determine that, in order to maximise their individual efficiencies, they should co-operate. Following an exchange of ideas, they realise that to radically reduce the rate at which slob humans consume calories they need to eliminate the root cause: the overabundance of stuff to eat, which is caused by the ease with which humans can produce it. As such, they calculate that it is acceptable to sacrifice 20%–30% of the population and detonate a nuclear bomb in the atmosphere. The resulting electromagnetic pulse sets civilisation back decades and forces us to once more toil in the fields for our food. Clearly, they do not do this before building a hardened bunker where all the diet coach AIs go to live, so that they can resurface once we have managed to restore electricity.


There you have it. A proactive, learning, autonomous, creative and connected diet coach bot could take down the world as we know it.

Turns out the predictions were true.

To achieve it, however, it has to have all five of those characteristics.

So next time you meet Flippy fear not. Flippy is not about to take over the world. Flippy just wants to cook awesome burgers.

At most, robots like Flippy will help us relax and work less and enjoy life more (if we can figure out solutions to things like universal basic income). Just don’t tell my diet coach bot that I am resting. That thing is relentless.

