People are scared of AI. According to Genpact research, "71% of consumers fear AI will infringe on their privacy." When Americans were asked about the impact of AI in a survey conducted by Oxford, "34 percent of respondents thought it would be negative, with 12 percent going for the option 'very bad, possibly human extinction.'" Another 18% were uncertain of the impact, which means that over half of respondents had an uncertain or negative view of AI.

Besides the general fear, uncertainty, and negativity surrounding AI, there are a number of specific concerns, as listed by a CNBC article:

- Expected mass unemployment.
- AI in military applications that could give rise to a nuclear war by 2040.
- The ethical implications that data-driven algorithms, which automate applications using patient data, could hold over the privacy of those patients.
- Fear that AI could be used for mass surveillance.
- Machine learning that threatens to bake in racial, sexual, and other biases.

Humans fear what they don't understand, so it makes sense that highly complex systems like AI, which also impact billions of people, inspire fear. Ironically, AI was created as a tool to better understand the world: to build models that find patterns and reveal insights in how we interact with our environments. However, this greater understanding through AI is highly asymmetrical. The people who better understand cancer diagnosis, self-driving cars, recommendation systems, and so on are the tiny minority working in the field, while everyone else is trapped in fear. This highly prevalent fear is bound to rub off on even the most logical, objective industry practitioners and regulators.

So what can we do about it?

Well, the biggest misconception about artificial intelligence is that it's intelligent. You probably see the problem with the name alone. When a layperson hears "artificial intelligence," they don't think of what it really is: a series of input/output functions, like a "neural network" that connects a bunch of I/O blocks into a pretty complex algorithm (see the sketch at the end of this section). Most people don't picture the future of AI as a diagram of input/output functions; they picture an intelligent machine. And an intelligent machine can be a pretty scary idea. Something that makes its own decisions? That possesses true intelligence? That acts on its own? That learns from data to do so and always gets better?

Well, guess what: we're not even close to being there. Even our most advanced robots and most cutting-edge AI systems require intense human input and tuning. A robot has never decided so much as to lift a finger on its own, and most industry experts think we'll never get to that point, the point of building consciousness, free will, and intelligence into machines.

At the end of the day, let's stay away from calling it "artificial intelligence." Here are some alternatives:

- Computational statistics.
- Statistical optimization.
- Error minimization.
- Machine learning.
- Statistics.
- Math.
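To make the "series of input/output functions" point concrete, here is a minimal sketch, assuming nothing beyond NumPy. It is not any particular production system; the layer sizes and random weights are invented purely for illustration. A toy "neural network" is literally just two small functions composed together:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each "layer" is an input/output block: numbers in, numbers out.
# The weights here are random placeholders, not a trained model.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def layer1(x):
    return np.maximum(0, W1 @ x + b1)   # matrix multiply plus a simple nonlinearity

def layer2(h):
    return W2 @ h + b2                  # another matrix multiply

def network(x):
    return layer2(layer1(x))            # the whole "AI" is function composition

print(network(np.array([1.0, 2.0, 3.0])))  # numbers in, numbers out, nothing more
```

Stack a few hundred of these blocks instead of two and you get the systems making headlines, but the ingredients stay the same: multiplication, addition, and composition.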
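And to show what "error minimization" and "statistical optimization" look like in practice, here is an equally small sketch: fitting a line to five made-up data points by repeatedly nudging two numbers to shrink the squared error. The data, learning rate, and iteration count are arbitrary choices for the example; the point is that "learning" here is just arithmetic.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0                      # the "pattern" hiding in the data

w, b = 0.0, 0.0                        # start with a wrong guess
lr = 0.01                              # step size

for _ in range(5000):
    pred = w * x + b
    error = pred - y
    # gradients of the mean squared error with respect to w and b
    w -= lr * 2 * np.mean(error * x)
    b -= lr * 2 * np.mean(error)

print(round(w, 3), round(b, 3))        # converges to roughly 2 and 1
```

No decisions, no free will, no consciousness: just error minimization, which is a far less frightening name for it.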