The press and some tech leaders are trying to scare us all about Artificial Intelligence (AI). Even Elon Musk, among the most daring entrepreneurs of our time, thinks we should be deathly afraid. I believe he is exaggerating: there is still a long way to go before we reach Artificial Super Intelligence (ASI), and even ASI is not a scary proposition.
Just three days ago, for example, Wolfram Alpha's supposedly AI-based image recognition tool wrongly suggested that I had mushrooms in my garden when it was in fact rabbit poop. How did I know? It took less than a minute for someone in one of my iMessage groups to confirm what it was. And, anecdotally, I've spoken to technologists who suggest that the large corporations advertising their AI capabilities do not yet have the capabilities they display on TV; that there is no steak behind the marketing sizzle.
So why the fear of this technology?
Where Are We With AI?
In 1955, Prof. John McCarthy defined the subject of Artificial Intelligence (AI) as the "science and engineering of making intelligent machines, especially intelligent computer programs". Like all general purpose technologies, AI will go from its current experimental state to being embedded in the fabric of most businesses. All the stories, press and fiction around the capabilities of AI suggest we are further along than we actually are. But we are still only at the stage of Artificial Narrow Intelligence (ANI), where the AI does one thing and does that one thing well, like Siri, or the autopilot that flew your plane for close to 90% of your last flight. And, despite what the press might make you think, AlphaGo is ANI: it defeated Lee Sedol only with the help of 100 scientists and a slew of distributed machines.
Explainable AI and How It Augments Jobs
We have a long way to go before machines truly gain consciousness and can sense and feel like human beings. Yes, today we have insurance companies using machine learning to automate and personalize customer support, trading firms optimizing their trades with neural networks, and hospitals using AI to automate diagnosis. The predictions suggest that intelligent robots will completely take over manual jobs, that more intelligent AI will take over analytical tasks (and subsequently whole roles), and that eventually Artificial Super Intelligence will take over everything. Some jobs are indeed already being lost to intelligent machines. But the fear far outstrips the actual impact. Why is this the case?
The fear comes from the black-box nature of the underlying AI process. Take Machine Learning (ML), a branch of AI. In the simplest of terms: data is fed into a model to train it; the model learns patterns from the training data and uses them to predict patterns in other data, or to make recommendations based on what it learned. The thinking behind the decision or recommendation the system provides is shrouded in mystery. It's what you could call unexplainable AI. There is a lack of transparency that is uncomfortable for the average user. In life-or-death situations, for example where an AI recommends surgery instead of chemotherapy, the recommendation might be counterintuitive and frightening. This is where we are with our AI tools today: they can tell us things but cannot tell us the why behind those things.
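The train-then-predict flow described above can be sketched in a few lines. The "model" here is a toy 1-nearest-neighbour classifier in plain Python with made-up data; real systems use far richer models, but the black-box shape is the same: data in, an answer out, and no reasoning attached.

```python
# Minimal sketch of the ML pipeline: train on labelled data, then predict.
# The model answers *what*, but offers nothing about *why*.

def train(examples):
    """'Training' for 1-nearest-neighbour is just memorising the examples."""
    return list(examples)  # the model *is* the stored labelled data

def predict(model, point):
    """Return the label of the stored example closest to `point`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(model, key=lambda ex: dist(ex[0], point))
    return nearest[1]  # a bare label: no explanation comes with it

# Toy (features, label) pairs, entirely made up for illustration.
training_data = [((1.0, 1.0), "benign"), ((8.0, 9.0), "malignant")]
model = train(training_data)
print(predict(model, (7.5, 8.0)))  # prints "malignant", with no rationale
```

To the user this is exactly the uncomfortable experience described above: the system hands back "malignant" and nothing else.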
But Mr. David Gunning of DARPA is proposing a system he calls Explainable AI (xAI). Unlike the unexplainable AI above, Explainable AI provides both an explainable model and an explanation interface that takes away the mystery of how the AI came to its decisions, providing some level of comfort to the user. Continuing the medical example, if the AI recommends surgery instead of chemotherapy, the physician understands why. This comfort gives the user ownership in the final implementation of the AI's recommendations; instead of just being the hand that signs the treatment papers, the doctor can explain to the family why the recommendation is to go with surgery.
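As a hypothetical sketch of what an explanation interface could add on top of a model (this is my illustration, not DARPA's actual design), consider a transparent linear scorer: each feature's weight times its value is that feature's contribution to the decision, so the system can hand back the decision and the factors behind it. All weights and patient features below are invented for illustration.

```python
# Hypothetical "explanation interface": return the decision together with
# the per-feature contributions that produced it, in the spirit of xAI.

def explain(weights, features, threshold=0.5):
    """Return a decision plus the per-feature contributions behind it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "surgery" if score > threshold else "chemotherapy"
    return decision, contributions

# Made-up weights and patient features, purely for illustration.
weights = {"tumour_size": 0.4, "patient_age": -0.1, "growth_rate": 0.5}
patient = {"tumour_size": 1.2, "patient_age": 0.8, "growth_rate": 0.9}
decision, why = explain(weights, patient)
print(decision)  # the recommendation...
print(why)       # ...and the factor-by-factor reasoning behind it
```

Here the physician sees not just "surgery" but that tumour size and growth rate drove the score, which is exactly the ownership the paragraph above describes.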
As DARPA's xAI program description puts it: "New machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future."
What We Fear and What Is Desirable
Our biggest fear about AI is that, when we think about the future of work, we see very few humans involved. Tools like xAI will allay that fear (to a certain extent) by moving our use of AI from autonomy of machines to augmentation of the human workers. While the xAI project focuses on intelligence analysis and autonomous systems (its processes of interest) and on classification and reinforcement learning (two machine-learning approaches), the end product will be a toolkit that other researchers and technologists can modify, optimize and share with the community, enabling the growth of this (for lack of a better term) user-friendly AI.
According to Creative Confidence by the Kelley brothers, every innovation needs viability, feasibility, and desirability. With AI:
- We are learning that ANI is feasible (the technology works).
- We are some way from viability for many AI use cases, but we have enough value-adding use cases at hand now.
- Where we are failing is in making it desirable to the people who are being led to believe that AI will put them in the unemployment line.
Explainable AI will start to move us toward a place where employees are augmented in their roles instead of replaced by technology, and toward a desirable future state for what is bound to be the next general purpose technology of our time: AI.