AI is the future. It comes with many promises, but also with many threats. In this post, I try to put the threats into perspective.
Right now there is an intense discussion about the threats of AI. Some people are very concerned, and many of them are famous: Elon Musk, Steve Wozniak, Bill Gates, Stephen Hawking and many more.
Even many opponents of today’s “AI doomsday cult” admit that AI has its dangers. And the discussion about the threats is very broad. It includes:
- Undetected bias in today’s AI algorithms that can lead to unjust decisions, e.g. in hiring or criminal sentencing. This already happens today.
- AI errors leading to accidents involving self-driving cars and other real-world consequences. This already puts human lives on the line, and as AI spreads, the consequences of errors will only become more severe.
- Most of today’s jobs will eventually be replaced by AI, potentially creating a flood of unemployed and frustrated people. The rise of Hitler in Germany was partly driven by huge unemployment rates. When a lot of angry young men have nothing better to do than fight, bad things are bound to happen.
- AI-based cyberwar that enables hackers and other cybercriminals to conduct their misdeeds unsupervised and at large scale.
- AI-based surveillance and big data analysis offer possibilities that would make Big Brother jealous. In nations without free speech, this could end all hope of resistance.
- A dutifully serving AI puts some humans (the ones controlling it) into almost almighty positions. Taking lobbying into account, this threatens the remaining “free nations”.
- AI-based warfare is very potent. Think of explosive intelligent drones that can be produced by the millions and search for targets all by themselves. Such cheap technology could also be used by terrorists and could harm thousands of people, maybe more.
- And last but not least, the technological singularity: an intelligence explosion that would threaten all of humanity in many different ways all by itself.
What this list shows very clearly is that “with great power comes great responsibility”. The more potent AI becomes, the more severe the threats it poses. And almost all of these threats are man-made. It is up to us how we deal with bias, unemployment, surveillance and warfare. Only at the last step, when AI becomes superintelligent and truly autonomous, are we at its mercy. But this is also the most critical step in terms of the maximum credible accident: possible extinction. And once that level of AI is reached, it cannot be undone.
There is no I in AI
AI in its current form is far from superintelligent and far from being such a threat. Sometimes it is even questionable whether the term intelligence is justified at all. But the past has shown that things can evolve pretty fast. And alarmingly few people are aware of the many real or theoretical threats that AI poses. That is a bad thing, because it means that an immense responsibility lies in the hands of only a few people. People who do not even acknowledge that responsibility. People who act as if they had found a shiny new toy.
It is clear that whoever controls an AI has a dutiful servant. It can be replicated without limit, works for free 24 hours a day, and can be made to analyze big data, play poker, move vast amounts of financial assets or steer unmanned drones. Whoever develops such an AI basically has unlimited power. This is why an arms race has already started, and it has low barriers to entry: basically everyone with a computer can participate.
As some have pointed out, current AI is absolutely safe. But there are already experiments with AI that improves itself. Once that process starts, strong AI could “happen” overnight. And we don’t know what that means. It could be disastrous. Or not. We just don’t know. And even if it isn’t, AI poses many other threats. I am pro AI; we are building a startup around it, after all. But people still need to know and understand the risks.
There is a fine line between raising awareness and crying wolf. Today’s media seem to cross that line on purpose; after all, a loud cry creates more valuable buzz than a restrained tone. My goal is simply to raise awareness for a more responsible handling of AI. I do not want to cry wolf. I even think that would be a bad idea: if we wear this topic out now, no one will be interested once it becomes really critical. Which I hope it doesn’t.