#Founder & #CEO of @retest_en (http://retest.org), bringing #AI to #test #automation.
AI is the future. It comes with many promises, but also with many threats. In this post, I try to put those threats into perspective.
Right now there is an intense discussion about the threats of AI. Some people are very concerned, and many of them are famous: Elon Musk, Steve Wozniak, Bill Gates, Stephen Hawking and more.
Even many opponents of today’s “AI doomsday cult” admit that AI has its dangers. And the discussion about the threats is very broad. It includes:

- bias in algorithmic decision-making
- unemployment through automation
- mass surveillance
- autonomous weapons and warfare
- a superintelligent AI beyond human control
What this list shows very clearly is that “with great power comes great responsibility”. The more potent AI becomes, the more severe the threats it poses. And almost all of these threats are man-made. It is up to us how we deal with bias, unemployment, surveillance and warfare. Only at the last step, when AI becomes superintelligent and truly autonomous, are we at its mercy. But this is also the most critical step in terms of the maximum credible accident: it means possible extinction. And once that level of AI is reached, it cannot be undone.
AI in its current form is far from becoming superintelligent and being a threat. Sometimes it is even questionable whether the term intelligence is justified at all. But the past has shown that things can evolve pretty fast. And alarmingly few people are aware of the many real or theoretical threats that AI poses. This is a bad thing, because it means that an immense responsibility lies in the hands of only a few people. People who do not even acknowledge that responsibility. People who act as if they had found a shiny new toy.
It is clear that whoever controls an AI has a dutiful servant. It can be copied an unlimited number of times, works for free 24 hours a day, and can be made to analyze big data, play poker, move vast amounts of financial assets or steer unmanned drones. Whoever develops such an AI basically has unlimited power. This is why an arms race has already started. And it has low barriers to entry: basically everyone with a computer can participate.
As some have pointed out, current AI is absolutely safe. But there are already experiments with AI that improves itself. Once that process starts, strong AI could “happen” overnight. And we don’t know what that means. It could be disastrous. Or not. We just don’t know. And even if it isn’t, AI poses many other threats. I am pro AI; we are building a startup around it, after all. But people still have to know and understand.
There is a fine line between raising awareness and crying wolf, and the media seem to cross it deliberately. After all, a loud cry creates more valuable buzz than a restrained tone. My goal is simply to raise awareness for a more responsible handling of AI. I do not want to cry wolf. I even think that would be a bad idea: if we wear this topic out now, no one will listen once it becomes really critical. Which I hope it doesn’t.