Each week brings another announcement on the progress of artificial intelligence. The rapid pace of progress has raised concerns about whether AI will take jobs, and how we could control rogue AIs should they arise. While I am generally an optimist about where AI will lead, I do understand the concern. In the past year, we have seen many examples of AI failures or systems gone awry.
This piece looks at three big problems in AI that we will have to face in the future. The first is spoofing — as AIs proliferate, how do we identify them? The second is failure — how do we know when an AI fails, and what can we do when it does the wrong thing? The third is compliance — how do we control AIs when they might go rogue or cause some sort of mischief?
Why Do AIs Fail?
What makes artificial intelligence programs so different from traditional software programs is that they follow probabilistic models instead of rules. Traditionally, software has been written as a series of rules that say "IF this thing happens, THEN do this other thing." But with AI (and deep learning in particular), that isn't how it works. AI models tend to give probabilistic answers like "IF this happened, then I'm 92% sure THIS is the next step." The fact that they deal in probabilities rather than strict rules makes them much more flexible than rules-based software, but there is no free lunch: the tradeoff is that occasionally these AIs are wrong.
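The contrast can be made concrete with a toy example. This is an illustrative sketch only — the function names are hypothetical, and the logistic curve simply stands in for a trained model:

```python
import math

def classify_rule_based(temperature_f):
    # Traditional software: an explicit IF/THEN rule, always deterministic.
    if temperature_f > 100.4:
        return "fever"
    return "normal"

def classify_probabilistic(temperature_f):
    # AI-style output: a confidence score rather than a hard rule.
    # A toy logistic curve stands in for a real trained model here.
    p_fever = 1 / (1 + math.exp(-(temperature_f - 100.4)))
    return {"fever": p_fever, "normal": 1 - p_fever}

# classify_rule_based(102) returns a certain answer: "fever"
# classify_probabilistic(102) returns something like
# {"fever": 0.83, "normal": 0.17} -- confident, but not certain
```

The rule-based function can never be "a little bit wrong" — it is either right or its rules are broken. The probabilistic version is wrong some fraction of the time by design, which is exactly the flexibility-versus-reliability tradeoff described above.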
Reinforcement Learning, which trains an AI by rewarding or penalizing it after each action so it updates what it has learned, has failed spectacularly on a public stage several times in the past 12 months. Two of the most interesting examples: first, a blog post from OpenAI about a reinforcement learning model that was learning to play a video game and learned an entirely wrong way to go about it. The second is Microsoft's Tay Bot which, when launched on Twitter to "learn," quickly became racist and misogynistic. Imagine if these were AI processes you launched at work that went awry in similar ways.
If You Are An AI, Who Are You?
As long as there has been technology, there have been people using it for scams. We have seen people spoof websites, spoof emails, and now spoof bots. How can we tell who owns an AI, how it has been trained, and what it is authorized to do? The very flexibility that lets AIs go do new things also means the people and other AIs involved in those new things have no way of knowing for certain the identity of the AI they are dealing with.
Should We Be Afraid of Rogue AI?
How do we keep AI from going rogue? Nick Bostrom's book "Superintelligence" and warnings from luminaries like Elon Musk and Stephen Hawking have raised the profile of the possibility that AI could turn against us. Control of autonomous agents that can "think" for themselves becomes problematic if the infrastructure they run on looks much like what we have today. How can we put some chains around such AI agents to make sure they don't go off track?
Blockchain As A Solution
Botchain, a protocol built on top of a blockchain, is a network supported by many of the top AI and bot companies to solve some of these problems. Botchain provides a decentralized identity for AI agents that is independent of the underlying platform they run on. Every major AI agent can register to be universally identified, giving a solution to AI identification similar to the way website certificates validate website ownership today.
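To illustrate the idea of a decentralized, platform-independent identity, here is a minimal sketch assuming a registry keyed by a hash of an agent's public metadata. All names are hypothetical and the real Botchain protocol may work differently:

```python
import hashlib
import json

registry = {}  # stand-in for an on-chain identity registry

def register_agent(owner, name, training_summary):
    # Derive the agent's ID from its public record, so the ID itself
    # commits to who owns the agent and how it was trained.
    record = {"owner": owner, "name": name, "training": training_summary}
    payload = json.dumps(record, sort_keys=True).encode()
    agent_id = hashlib.sha256(payload).hexdigest()
    registry[agent_id] = record  # on-chain, this write would be immutable
    return agent_id

def verify_agent(agent_id, claimed_record):
    # Anyone can recompute the hash of a claimed record and check it
    # against the registry -- much like validating a website certificate.
    payload = json.dumps(claimed_record, sort_keys=True).encode()
    return agent_id in registry and hashlib.sha256(payload).hexdigest() == agent_id
```

Because the ID is derived from the record rather than assigned by any one platform, an agent carries the same verifiable identity wherever it runs.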
Botchain also allows every AI agent to write regular hashes of its activity to a blockchain so that the record is immutable and inspectable by those holding the appropriate keys. This proves what the AI did and when it did it, and ensures bad actions can be discovered, analyzed, and rectified, because blockchain immutability makes it impossible to "cover your tracks" and delete the data.
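A hash chain is the standard way to make such an activity log tamper-evident: each entry's hash covers the previous entry's hash, so editing or deleting anything breaks every link after it. This is a generic sketch of that technique, not the actual Botchain record format:

```python
import hashlib
import json

audit_log = []  # each entry's hash covers the previous entry's hash

def record_action(agent_id, action):
    prev_hash = audit_log[-1]["hash"] if audit_log else "0" * 64
    body = json.dumps(
        {"agent": agent_id, "action": action, "prev": prev_hash},
        sort_keys=True,
    )
    entry = {
        "agent": agent_id,
        "action": action,
        "prev": prev_hash,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    }
    audit_log.append(entry)
    return entry["hash"]

def verify_log():
    # Recompute every hash in order; any edited or deleted entry
    # breaks the chain and the whole log fails verification.
    prev = "0" * 64
    for e in audit_log:
        body = json.dumps(
            {"agent": e["agent"], "action": e["action"], "prev": prev},
            sort_keys=True,
        )
        if e["prev"] != prev or hashlib.sha256(body.encode()).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True
```

An auditor who holds only the chain of hashes can prove what the agent did and when, without needing to trust the platform the agent runs on.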
And finally, the consensus mechanisms of blockchain can help keep any possible rogue AIs under control. By keeping a public record of the tasks an agent is permitted to perform, verified by multiple blockchain nodes, we can make sure an AI doesn't overstep its limits.
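The idea of multiple nodes checking a public permission record can be sketched as follows. The names, the quorum rule, and the single shared permission table are all simplifying assumptions — in a real deployment each validator would hold its own copy of the on-chain record:

```python
# Public, on-chain record of what each agent is allowed to do (illustrative).
PERMITTED_TASKS = {
    "agent-42": {"answer_questions", "schedule_meetings"},
}

def node_approves(agent_id, task):
    # Each validator node independently checks the public record.
    return task in PERMITTED_TASKS.get(agent_id, set())

def authorize(agent_id, task, num_nodes=5, quorum=3):
    # The task runs only if a quorum of independent nodes approves it,
    # so no single compromised node (or agent) can grant extra powers.
    votes = sum(node_approves(agent_id, task) for _ in range(num_nodes))
    return votes >= quorum

# authorize("agent-42", "schedule_meetings") -> True
# authorize("agent-42", "transfer_funds")    -> False
```

The design choice here is that authority lives in the public record and the quorum, not in the agent itself — an AI that "decides" to do something outside its registered task list simply cannot get the action approved.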
Despite the concerns that AI may cause damage in the future, emerging technologies like blockchain can help us transition to an AI future with more confidence. If you are an AI company that would like to participate in the Botchain ecosystem, you can sign up here.