You know the story. AI will rise up and kill us all.

Didn’t [Facebook have to shut down their latest monstrous experiment because it went rogue](https://www.thesun.co.uk/tech/3840240/facebooks-artificial-intelligence-created-a-secret-language-after-going-rogue-during-experiment/) and developed its own secret language?

It’s only a matter of time. For all we know, Skynet’s factories are cranking out an army of Terminators already! We better move fast!

The only problem is, it’s all nonsense.

Elon Musk ain’t helping.

AI is an “[existential threat](http://fortune.com/2017/08/12/elon-musk-ai-poses-vastly-more-risk-than-north-korea/) worse than North Korea,” he warns. Last I checked, North Korea has nukes and a little madman in power, while super-AI is still confined to the pages of cyberpunk novels, so I’m not buying it. Look, the guy is a lot smarter than me, and I think his [batteries](https://www.tesla.com/powerwall), [cars](https://www.tesla.com/model3) and [solar roof tiles](https://www.tesla.com/solarroof) will change the world, but he’s spent a little too much time watching _2001: A Space Odyssey_.

The pop press isn’t helping either. How else do we end up with three months of coverage of a bogus story about [Facebook shutting down their AI because it got too smart](https://qz.com/1043365/facebook-didnt-kill-its-language-building-ai-because-it-was-too-smart-it-was-actually-too-dumb/)? Guess what? It’s not true.

They shut it down because it was a crappy, failed program that didn’t do its job. Simple as that.

And yet somehow every day a new version of that story pops up on my social media feeds: AI did something diabolical and had to be stopped.

![](https://hackernoon.com/hn-images/1*wtfSDaotLiCx-EFmCudGcg.jpeg)

Don’t get me wrong. I’m a [sci-fi writer](http://meuploads.com/). I love this stuff. [Terminators](http://amzn.to/2vBQQ3G)? [HAL](http://amzn.to/2fM0L1i)? [Aliens](http://amzn.to/2i46S1B)? [Star Trek](http://amzn.to/2vEd84Y)?
Some of the greatest stories ever written.

But that’s just what they are: stories.

**And they distract us from dealing with _real problems we have right now in Artificial Intelligence_.**

Even if we wanted to stop super-intelligent machines from slaughtering us all, we can’t. Why?

**Because they don’t exist, and _you can’t create a solution for a problem that doesn’t exist_.**

We literally cannot solve this problem right now! Take something like Asimov’s famous “[Three Laws of Robotics](https://www.auburn.edu/~vestmon/robotics.html).” They’re nothing but a literary construct. [They can’t protect us](http://io9.gizmodo.com/why-asimovs-three-laws-of-robotics-cant-protect-us-1553665410). You know why? Because that’s not how we program AI! We don’t give it a bunch of explicit rules. It figures out the rules _for itself_.

Asimov imagined a solution to an imaginary problem, and it won’t work because that’s not how AI actually works. All of our imaginary solutions will just look hopelessly stupid when mega-brilliant machines come calling.

![](https://i.ytimg.com/vi/-O01G3tSYpU/hqdefault.jpg)

The truth is we really don’t have a freaking clue how to build true intelligence. Listen to DARPA: today we have “spreadsheets on steroids,” not general-purpose AI. There is no consciousness behind it.

AGI (Artificial General Intelligence) is not even in a lab somewhere. We don’t know [what kind of processors](https://www.technologyreview.com/s/526506/neuromorphic-chips/) we’ll need. We don’t know the right algorithms. We don’t even really know where to start!

For [sixty years researchers thought we were just around the corner](https://en.wikipedia.org/wiki/History_of_artificial_intelligence#The_first_AI_winter_1974.E2.80.931980) from machines that thought and acted like us.

We’re still waiting.

Researchers figured if they gave a machine a few basic rules it would magically become Einstein in a box.
Turns out, we don’t know how we do what we do **because it’s happening _automatically_ and _unconsciously_.**

**We’re a black box.**

If you want to tell a computer how to recognize a cat, it seems simple because you do it every day, but that’s because the complexity is hidden away from you. If you really stop to think about it, what you do in a fraction of a second involves a massive number of steps. On the surface it’s deceptively simple, but it’s actually incredibly complex.

Now at least our **Deep Learning systems** can recognize sounds and [pick cats out of a picture](https://hackernoon.com/learning-ai-if-you-suck-at-math-p5-deep-learning-and-convolutional-neural-nets-in-plain-english-cda79679bbe3) by figuring out those rules for themselves. That’s something. But it’s not consciousness.

We have a long way to go to C-3PO and R2-D2.

### What We Talk About When We Talk About AI

The real problem with talking about super-intelligent robots and existential threats is that **today’s problems are much more insidious and under the radar.**

Let’s take a look at a few to understand why.

Here are the big ones:

* AI security
* Bias built into models
* _Initial_ job disruption
* Backlash from mistakes

### Security

This one is a major challenge with no easy answers.

What do I mean by security? Today it’s super easy to [corrupt Deep Learning systems by altering the data they’re fed](https://www.theverge.com/2017/4/12/15271874/ai-adversarial-images-fooling-attacks-artificial-intelligence). Hide a little _Snow Crash_-style distortion in images and convolutional neural nets go from smart to real stupid, real quick.
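That trick is worth making concrete. Below is a minimal, hypothetical sketch of the Fast Gradient Sign Method (Goodfellow et al.) applied to a toy logistic-regression “cat detector” with made-up weights. Real attacks work the same way against deep image classifiers, just in millions of dimensions where the per-pixel nudge can be far smaller:

```python
import numpy as np

# A toy "cat detector": logistic regression with made-up, fixed weights.
# Everything here is hypothetical, chosen only to show the mechanics.
w = np.array([1.0, -2.0, 3.0, -1.5])
b = 0.1

def predict(x):
    """Probability the model assigns to the 'cat' class."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, epsilon=0.6):
    """Fast Gradient Sign Method: step each input feature in the
    direction that most increases the loss for the true label (1)."""
    p = predict(x)
    grad = (p - 1.0) * w  # gradient of the log-loss w.r.t. x for label 1
    return x + epsilon * np.sign(grad)

x = np.array([0.5, -0.5, 0.8, 0.0])  # classified "cat" with high confidence
x_adv = fgsm(x)

print(predict(x))      # high confidence it's a cat
print(predict(x_adv))  # decision flips, though every feature moved only a little
```

The model never “sees” anything wrong; the perturbation is crafted from the model’s own gradient, which is exactly why these attacks are so cheap to mount.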
None of those tricks would work on the dumbest people on the planet, but they fool our best machines.

![](https://hackernoon.com/hn-images/1*WhUn5EUXCs9fUJxeQJI_SA.jpeg)

Image by [Ian Goodfellow, Jonathon Shlens, and Christian Szegedy](https://arxiv.org/pdf/1412.6572.pdf)

**What we have today is _the exact opposite_ of super intelligence.**

Call it dumb-smart AI, or “narrow” AI.

Machine Learning and Deep Learning systems have zero higher reasoning or moral compass. They’re just boxes of applied statistics. There is no desire or will behind them, except our own.

Today these tricks are confined to the lab. But as these systems [come to dominate fraud detection](https://stripe.com/blog/a-primer-on-machine-learning-for-fraud-detection) and [supply chain logistics](https://www.economist.com/news/business/21720675-firm-using-algorithm-designed-cern-laboratory-how-germanys-otto-uses), people can and will learn to hack them.

![](https://i.ytimg.com/vi/-96BEoXJMs0/hqdefault.jpg)

International gangs will do everything they can to warp that data to hide illicit deals, grand theft and everything in between. Even worse, you could kill someone with these tricks. If [you manage to fool a self-driving car by doctoring a street sign](https://techcrunch.com/2016/08/25/the-biggest-threat-facing-connected-autonomous-vehicles-is-cybersecurity/), you could send someone hurtling into a wall and a fiery death.

Want to cover up money laundering? Corruption? Hacking AI will make it easier. People can and will learn to hack fraud detection classifiers, sentencing software, and more.

This will be the target of choice for nation states, espionage masters and black-ops squads. We now know it wasn’t some blonde CIA agent who found Bin Laden but an analytic model built by [Palantir](https://www.palantir.com/).
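How do you hack a system like that? Often the easiest way is not to touch the model at all, but to poison the data it learns from. Here is a deliberately tiny, entirely hypothetical sketch: a “fraud detector” that learns a cutoff from labeled transaction scores, and an attacker who slips mislabeled records into the training database:

```python
import numpy as np

def train_threshold(scores, labels):
    """A toy fraud detector: learn a cutoff halfway between the mean
    anomaly score of legitimate (0) and fraudulent (1) transactions."""
    legit_mean = scores[labels == 0].mean()
    fraud_mean = scores[labels == 1].mean()
    return (legit_mean + fraud_mean) / 2.0

def is_fraud(score, threshold):
    return score > threshold

# Clean historical data (hypothetical anomaly scores).
scores = np.array([0.1, 0.2, 0.15, 0.9, 0.95, 0.85])
labels = np.array([0, 0, 0, 1, 1, 1])

t_clean = train_threshold(scores, labels)
print(is_fraud(0.7, t_clean))  # the shady transaction gets flagged

# An attacker with write access to the training database injects
# high-scoring records falsely labeled "legitimate".
poison_scores = np.array([0.8, 0.85, 0.8, 0.85, 0.8])
poison_labels = np.zeros(5, dtype=int)

t_poisoned = train_threshold(
    np.concatenate([scores, poison_scores]),
    np.concatenate([labels, poison_labels]),
)
print(is_fraud(0.7, t_poisoned))  # the same transaction now slips through
```

The detector isn’t “broken” in any way it can notice. It faithfully learned from the data it was given; it simply has no way to know the data was tampered with.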
If a foreign government wanted to attack a Bin Laden detector, they might hit the database storing the satellite images or the NSA’s captured phone records.

If they manage to poison those databases, the AI won’t know the difference. Remember, it has no higher reasoning of its own. It’ll happily gobble up the wrong data and start looking in the wrong places for terrorists.

There is also a real worry about military AI that goes well beyond Terminators. In my novel the [**_Jasmine Wars_**](http://amzn.to/2vDcIdC), a young hacker creates an AI called Swarm that coordinates attacks in thousands of places at once to disguise one real attack, which quickly overwhelms conventional armies.

**After the war, the hacker destroys the system, _not because he’s worried about it growing conscious_ but precisely _because it has no consciousness at all_.**

![](https://pbs.twimg.com/profile_images/831025272589676544/3g6BrXCE_400x400.jpg)

In other words, it obeys anyone with the proper keys.

![](https://i.ytimg.com/vi/U2iiPpcwfCA/hqdefault.jpg)

AI has no morals. Skynet and the machines in the Matrix have a good point about humans: we’re kind of jerks. We don’t actually need any help killing each other; we’ve been doing it just fine since the first caveman picked up a stick to bash someone’s head in.

Military systems that simply follow orders will follow whatever morals their creators have, _even if they have none_.

![](https://i.ytimg.com/vi/LQUXuQ6Zd9w/hqdefault.jpg)

If authoritarian regimes with automated killing machines don’t scare you more than super-AI, they should.

In fact, super-intelligent machines might just be a step up from our own idiocy. I welcome our robot overlords. Maybe they’ll do a better job than the current morons running the show.

That brings us to bias.

### Bias

The sad fact is that most people can’t see objective reality all that clearly. They see a movie in their head and project it onto the world.
So how can people define what is truly “good” for a model and what’s bad?

You and I might generally agree on what makes a good self-driving car. That’s pretty easy:

* It shouldn’t hit anyone.
* It shouldn’t veer off the road.
* It should get to where you want to go.

But many other tasks are subject to the eye of the beholder and their moral compass, or lack thereof.

Take sentencing software, used by judges. You probably don’t realize it, but [we’ve been using AI sentencing software for years](https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/) in courts.

But how do you define a criminal?

What we define as crime changes with time and with the people in power.

![](https://hackernoon.com/hn-images/1*axEXfbE99-XDBGlF3K6Nsw.jpeg)

Chinese propaganda poster from the Mao era.

How China defines a criminal is much different from how you or I might. Criticize the government? Crime. [Win too many cases against the government? Crime](http://www.foxnews.com/world/2017/02/15/report-human-rights-lawyers-in-china-beaten-arrested.html). They routinely beat and jail lawyers who stand up for individual rights and the little people. They [hunt down dissidents, even in other countries](http://nypost.com/2016/04/26/the-book-that-got-this-publisher-kidnapped-by-china/).

Should we teach AIs that too?

Actually, that’s exactly what we’ll do. Bet on it. AI will help authoritarians scale their operations.

If powerful machines making decisions about your life and hunting down dissidents don’t terrify you, I don’t know what will. Again, we don’t need diabolical super-intelligent machines to have terrible morals; we’re already awesome at having none ourselves.

Even worse are the sentencing decisions we make from criminal histories. If we let an AI chew on all the arrest records in the US for the past fifty years, what will it find?

A bunch of poor people.
A bunch of African Americans.

Maybe you don’t think that’s really a bias at all. That’s just the way it is; some things will never change. But John Ehrlichman, Nixon’s domestic policy chief and the [architect of the drug war](http://www.cnn.com/2016/03/23/politics/john-ehrlichman-richard-nixon-drug-war-blacks-hippie/index.html), disagrees, in a perfect definition of how to abuse the law for dark purposes. Here’s what he had to say:

> “We knew we couldn’t make it illegal to be either against the war or black, but by getting the public to associate the hippies with marijuana and blacks with heroin, and then criminalizing both heavily, we could disrupt those communities,” Ehrlichman said. “We could arrest their leaders, raid their homes, break up their meetings, and vilify them night after night on the evening news. Did we know we were lying about the drugs? Of course we did.”

So how will a sentencing algorithm make decisions after studying that history? You guessed it. When the judge is trying to figure out whether someone is a likely flight risk, who’s going to jail?

The same people we always put there.

Even worse, it will now have the _illusion of authority and impartiality_ because the “computer said so.” People don’t question computers.

But they really better start.

### **Job Disruption**

The next major challenge: what do we do with all the folks who get automated out of a job?

Now before I go further, it’s crucial to note that job loss is the most overblown story in AI next to existential threats. In the long term, [automation can ironically _create more jobs_](https://www.wsj.com/articles/automation-can-actually-create-more-jobs-1481480200). And of course, it’s much easier to see the jobs we’ll lose than the ones _AI will create_. It’s hard to see what comes around the corner. You can’t explain a web programmer’s job to an 18th-century farmer because he has no context for it. The Internet didn’t exist.
Without the Internet there is no web programmer. We don’t know what other inventions, ones we can’t see yet, will help mitigate the threat.

As stupid as we are at times, we’re also incredibly creative and resourceful. When problems arise, we find solutions, somehow, someway. Necessity is the mother of invention. We will invent solutions as these things come to pass. We’ll have no choice. The question is what kind of chaos we have to live through before we come up with a real answer.

Make no mistake though: automation is a real threat.

Lots of people losing their jobs at once is a recipe for disaster.

I have a story that I wrote fifteen years ago called [**_In the Cracks of the Machine_**](http://amzn.to/2uDGGkl), where the AI revolution starts in fast food. One greasy-spoon chain goes fully automated and the others quickly follow to stay competitive. That causes a domino effect in society, and we swiftly suffer mass unemployment, which leads to rage.

“Fear leads to anger. Anger leads to hate. Hate leads to suffering,” said Yoda.

![](https://hackernoon.com/hn-images/1*4ApJ40LAGuT-tZ7su3ERww.jpeg)

Money was worthless in Germany during the hyperinflation of the early 1920s; children played with it in stacks. From Getty Images.

Mass unemployment is a witch’s cauldron of unrest and violence. The Chinese have a proverb: “When the price of rice is high, Heaven decrees new rulers.” Germany went crazy in the 1920s and ’30s for this very reason: hyperinflation followed by massive job losses and economic stagnation. When you have a lot of angry young men on the bread lines with nothing better to do than fight, bad things happen.

[Universal Basic Income](https://artplusmarketing.com/how-we-can-deliver-a-universal-basic-income-right-now-and-save-ourselves-from-the-robots-without-e1972e22e8eb) is a partial answer, but do you see governments that can barely agree on anything passing it any time soon? I don’t.
In fact, I see them doing it only as a desperate reaction, which is the exact opposite of what we need.

Don’t get me wrong. Long term, I’m bullish on AI. It can and will change the world for the better. It will help us [find cures for cancer](https://insidehpc.com/2017/05/turning-ai-cancer-data-science-bowl/) and other terrifying diseases.

![](https://hackernoon.com/hn-images/1*m1QmhJhOCysbdOmM3G4Xuw.jpeg)

The Star Trek-inspired Tricorder XPRIZE for building home diagnostic machines.

It will save lives as it automates treatment recommendations and helps hospital staff triage patients. It will diagnose disease at home, and that means more people will get the right care at the right moment, instead of when it’s too late.

A massive [German retailer already uses AI to predict what its customers will want a week in advance](https://www.economist.com/news/business/21720675-firm-using-algorithm-designed-cern-laboratory-how-germanys-otto-uses). It can look at massive amounts of data that no human can and spot patterns we miss. It now does 90% of the ordering without human help. Their factories crank at top speed, their warehouses are never full of stuff they can’t sell, and people get what they want before they even know they want it.

And to top it all off, **_they hired more people_**, now that they’ve freed their staff of drudgery. That’s how AI can and will go in the long term. In the end, AI assistants and automation will likely lead to a boom in creativity and productivity.

But in the short term we might not deal with the disruption very well. And when we don’t deal with problems, nature has a way of dealing with them for us.

If you don’t patch the dam, eventually it breaks and the river drowns everyone living below it.

### Backlash

The simplest problem to foresee is backlash from AI mistakes.

The fact is, humans are awful at seeing real threats. The US has spent $5 trillion on anti-terrorism wars since 9/11.
But the chance of the average person dying from terrorism is absurdly small. On the flip side, [heart disease kills 1 in 4 men in the US](https://www.cdc.gov/dhdsp/data_statistics/fact_sheets/fs_men_heart.htm). And yet we spend only about $10 billion a year on cancer and heart disease research combined.

We’re wired to see big, flashy threats, not the tiny ones that play out over time. That’s why we’ll likely do something stupid the very first time an AI screws up and costs lives or money, and that overreaction will cripple much-needed research.

The first five-car pileup from a self-driving car could easily cause Congress to overreach with terrible legislation. That would set the industry back years and put us behind other countries real fast. China would leapfrog the US almost overnight if we go crazy with legislation.

### Can We Talk About Real Problems Now?

We have lots of serious challenges with AI. And yet we seem utterly incapable of talking about the real issues. That needs to change fast, because there are lots more, such as:

* How do we audit the decisions an AI makes?
* Can we even “fix” an AI’s mistakes when it decides to crash into a wall? There are no explicit rules to change, so how do we make sure it doesn’t do the same thing again next time?
* When a car crashes, who pays? Who’s responsible?
* If you don’t get a loan from an AI, can a human intervene and change its decision?

The list goes on and on. It will only grow with each passing day.

So let’s focus on the issues that really matter _today_ instead of ones that won’t matter for 50 or 100 years. Or else we won’t need Terminators to wipe us out.

Our own stupidity will do just fine.

############################################

**If you enjoyed this article, I’d love it if you could hit the little heart to recommend it to others. After that, please feel free to email the article to a friend!
Thanks much.**

###########################################

#### If you love the crypto space as much as I do, come on over and join [DecStack, the Virtual Co-Working Spot for CryptoCurrency and Decentralized App Projects](http://decstack.com/), where you can rub elbows with multiple projects in the space. It’s totally free forever. Just come on in and socialize, work together, share code and ideas. Make your ideas better through feedback. Find new friends. Meet your new family.

###########################################

![](https://hackernoon.com/hn-images/1*2NoaJZdn-t-1HPQGiUq9-A.jpeg)

[Photo credit](https://extranewsfeed.com/the-winds-of-world-war-iii-8bc369584f67)

_A bit about me: I’m an author, engineer and serial entrepreneur. During the last two decades, I’ve covered a broad range of tech from Linux to virtualization and containers._

_You can check out my latest novel,_ [**_an epic Chinese sci-fi civil war saga_**](http://amzn.to/2gAg249) _where China throws off the chains of communism and becomes the world’s first direct democracy, running a highly advanced, artificially intelligent decentralized app platform with no leaders._

#### [You can get a FREE copy of my first novel, The Scorpion Game,](http://meuploads.com/join-my-readers-group/) when you join my Readers Group. Readers have called it “the first serious competition to Neuromancer” and “Detective noir meets Johnny Mnemonic.”

#### You can also check out the [Cicada open source project](http://iamcicada.com/) based on ideas from the book that outlines how to make that tech a reality right now, and you can get in on the alpha.

#### Lastly, you can [join my private Facebook group, the Nanopunk Posthuman Assassins](https://www.facebook.com/groups/1736763229929363/), where we discuss all things tech, sci-fi, fantasy and more.