Don’t Fear AI, Fear Human Stupidity

by Daniel Yarmoluk, June 25th, 2017
Last year, the famed theoretical physicist Stephen Hawking quipped that artificial intelligence was “either the best or the worst thing, ever to happen to humanity.”

He’s not alone in that sentiment. For each proclamation that AI algorithms will transform the world for the better, there seems to be a dystopian counterargument that the technology will doom humanity. Proponents of the latter view often invoke Terminator-like imagery to drive home the risk that super-smart robotic overlords will soon pose to us all. Yet this view has nearly as much in common with Mary Shelley’s Frankenstein, first published in 1818, as with anything in modern computing; the main difference is that it swaps the Frankenstein monster for AI-empowered robots.

Most of these scenarios, however, don’t stand up to closer scrutiny. Why would artificially intelligent robots want to exterminate humans, especially if they are, well, intelligent? Is this worry itself rational? “It isn’t artificial intelligence I’m worried about, it’s human stupidity that concerns me,” said futurist Peter Diamandis in Episode 33 of his Exponential Wisdom podcast, quoting his colleague Neil Jacobstein, who heads Singularity University’s AI division.

We’d be better served by focusing on humans’ irrational thought patterns than by fearing some AI-enabled bogeyman. Doing so would shed light on the illogical biases and thought patterns that can intrude into AI algorithms. It would also help us understand how AI can overcome our inherent intellectual handicaps and boost productivity.

The book Thinking, Fast and Slow by Daniel Kahneman is a valuable resource in this regard. It summarizes decades of research that led to Kahneman’s 2002 Nobel Memorial Prize in Economic Sciences, touching on the psychological underpinnings of biases while also exploring behavioral economics. His findings challenge the assumption of human rationality in modern economic theory.

The Brain’s Two Systems

Here’s an overview:

System 1 (fast) operates involuntarily, automatically and quickly, with little or no effort and no sense of voluntary control. This system is responsible for our intuitions, feelings and impressions. Conversely, System 2 (slow) allocates attention to the effortful mental activities that demand it, including complex computations. System 2 requires focus and forms our beliefs; its operations are linked with the subjective experience of agency, choice and concentration. To a degree, System 2 can override System 1 through what we call “self-control.” Things get trickier, however, during multitasking.

The Busy System and the Lazy System

System 1 is always on, always busy and always searching for answers. It controls everything we do that is automatic, unconscious and instinctive. It feeds us information whether we seek it or not, creates prejudices and tells us whether we “like” or “dislike” things. System 2 performs the tasks that System 1 cannot, so the two systems must work together to create a well-functioning mind. System 2, however, is slow and lazy: following the law of least effort, it exerts itself as little as possible. We are thus intrinsically lazy, which is why our minds employ heuristics to get the job done.
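For readers who think in code, here is a loose, purely illustrative analogy (mine, not Kahneman’s): System 1 behaves like a cache that returns remembered answers instantly, while System 2 is the slow, effortful computation that fills that cache in the first place.

```python
from functools import lru_cache
import time

def system2(question: str) -> str:
    """Slow, effortful deliberation: demands attention and focus."""
    time.sleep(0.5)  # stand-in for genuinely hard thinking
    return f"considered answer to {question!r}"

@lru_cache(maxsize=None)
def system1(question: str) -> str:
    """Fast, automatic intuition: replays what experience has stored."""
    return system2(question)  # a novel question forces slow thinking, once

system1("is this safe?")  # first call: slow, System 2 engaged
system1("is this safe?")  # repeat call: instant, cached intuition answers
```

The analogy is imperfect, of course: a real System 1 also answers questions it has never seen before, by substitution, as the sections below describe.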

Attention and Effort or Cognitive Ease and Cognitive Strain

System 1 is always computing, and it taps System 2 when more effort is required. If our brains sense that all is well, we enjoy cognitive ease and pleasure (we like what we see, feel comfortable, and so on). If our brains detect new information, potential threats, vital data or an unmet demand, we experience cognitive strain; in this state we are less comfortable but tend to make fewer errors. These concepts simply describe the level of effort required to process information. Because humans try to avoid cognitive strain, we are especially vulnerable to biases, and our laziness pushes us to confirm those biases rather than question them, which leads to poor decision-making.

Heuristics, the Mental Shotgun

System 1 functions by jumping to conclusions rather than carefully weighing evidence; it produces answers based almost entirely on previous experience. Because intuitive thinking draws only on that limited evidence, it is far from ideal for decision-making.

When the mind encounters a difficult question or concept and no immediate answer presents itself, System 1 engages in substitution: it searches for a related, simpler question instead of answering the harder one actually posed. This mechanism is also known as the “mental shotgun.” It explains why humans hold such strong biases while remaining completely unaware of them; we may deny those biases even when others point them out. Optical illusions, racial stereotypes, snap judgments about what is fair and the reflex to reach for causal explanations are all examples.
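Here is a minimal sketch of substitution, framed as a hypothetical hiring question; the function name and the likability score are inventions for illustration, not anything from Kahneman:

```python
def will_candidate_perform_well(candidate: dict) -> float:
    """The hard question: would demand evidence, base rates and System 2 effort."""
    # Substitution: System 1 quietly answers the easier question
    # "how much do I like this person?" and reports that as the answer.
    return candidate.get("likability", 0.5)

# The judgment feels like a forecast of job performance,
# but it is really a likability score in disguise.
print(will_candidate_perform_well({"likability": 0.9, "experience_years": 1}))
```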

Heuristics and biases profiled by Kahneman include anchoring, availability, the narrative fallacy and outcome bias. The narrative fallacy occurs when flawed stories from the past shape the way we see the world and what we expect from the future. Outcome bias occurs when we judge a past decision by its outcome rather than by the circumstances at the time. Hindsight bias, the tendency to think you knew something all along when perhaps you didn’t, is another.

Planning Fallacy

The planning fallacy is our tendency to underestimate the time, risks and costs of a future task and to overestimate its benefits, even when we should know better.

Risk and Loss Aversion

Risk aversion means that when exposed to uncertainty, we behave in ways that reduce it: we avoid risk even when the certain result is far less favorable than the expected result of the risky course of action. Many people, for example, prefer a sure $450 to a 50% chance of $1,000, even though the gamble’s expected value is $500.

Sunk Cost Fallacy and Fear of Regret

A sunk cost is a cost we have already incurred. It should not bear on our current choices, yet it does: driven by our aversion to loss and fear of regret, we treat the price we paid as the yardstick for an option’s present value.

Experiencing Self and Remembering Self

The experiencing self operates intuitively and quickly in the moment. The remembering self later retrieves those experiences, but in a form the brain has often altered and colored.

The Accuracy of Algorithms

When it comes to predicting the future, algorithms are almost always more accurate than people, even experts. Humans, including highly educated ones, are inconsistent and often erroneously trust their intuition or “gut.” Data passed through an algorithm is far more consistent and less biased than human interpretation. An algorithm is a step-by-step process for performing a function: given the same inputs, it will always produce the same output if followed correctly, and even short algorithms can perform complex tasks. It’s like a recipe.
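Here is a minimal sketch of that consistency claim, using a made-up two-variable scoring rule; the weights and the “mood” noise are purely illustrative:

```python
import random

def algorithmic_score(income: float, debt: float) -> float:
    return 0.7 * income - 0.3 * debt         # same inputs, same output, every run

def expert_score(income: float, debt: float) -> float:
    mood = random.uniform(-5.0, 5.0)         # fatigue, hunger, anchoring...
    return 0.7 * income - 0.3 * debt + mood  # same inputs, a different answer each time

print(algorithmic_score(100.0, 20.0))  # 64.0, today and tomorrow
print(expert_score(100.0, 20.0))       # varies from call to call
```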

What It All Means

We can’t always trust our thoughts and intuitions, owing to biases, heuristics, fallacies and aversions. We should approach problems slowly and systematically, resisting the urge to jump to conclusions, particularly on problems that seem to have a simple answer. We should stay cognizant of our natural susceptibility to biases, stereotypes, prejudices and fallacies, and explore rather than settle for lazy decision-making: we do not understand the world as well as we think we do. Kahneman advises us to doubt our judgments and intuitions and to take time evaluating the thoughts that come into our minds. Our minds are hardwired for error, eager to exaggerate and to ignore our own ignorance. Recognizing this can help us deal more effectively with issues like racism, poverty, violence and inequality, while also developing algorithms that can supercharge Internet of Things implementations.

Algorithms are not perfect: Weapons of Math Destruction by Cathy O’Neil is a cautionary tale of faulty variable design and of models that reinforce, once again, human biases. Taken holistically, however, data science, algorithms and artificial intelligence can aid human decision-making and the search for truth. We also have to realize that much of AI will be built on machine-to-machine inputs and outputs, for example, sensor thresholds that trigger actions without human interjection. Evidence-based, data-driven people, organizations and missions can avoid misunderstanding and conflict and make better decisions.
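That machine-to-machine pattern is easy to picture. In this sketch, every name (read_temperature, open_vent, the 75° threshold) is a hypothetical stand-in for a real sensor and actuator:

```python
THRESHOLD_C = 75.0  # hypothetical trip point

def read_temperature() -> float:
    return 78.2  # stand-in for polling a real sensor

def open_vent() -> None:
    print("vent opened")  # stand-in for commanding a real actuator

# The machine decides and acts on its own; no human interjection required.
if read_temperature() > THRESHOLD_C:
    open_vent()
```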

Originally published at https://www.linkedin.com on May 4, 2017.