The future is already here, but that doesn’t mean we’re ready for it. The last two years have been rife with stories that make the average consumer think twice about the role AI plays in their lives. After all, for AI to be effective, it needs data. And to acquire data, companies must collect it from users whose personal information, whether they realize it or not, has become the world’s most valuable commodity. And if AI is effective, what does that mean for the millions of jobs that can now be handled by machines?
Countless data breaches, fear over job loss to AI systems, and the Cambridge Analytica scandal put all of this into a sobering perspective. Do we really want every aspect of our lives recorded and analyzed for the benefit of smarter computer systems?
And if we do, how do we balance those gains against so many understandable fears? It starts with a keen understanding of the role human intelligence plays in our implementation of AI, and how without one, the other is doomed to fail.
Wariness is understandable, but consider for a moment what AI is capable of doing. Never before have we found anything in this universe as malleable and full of potential as the human brain. Yet now we see computers beating humans at some of the most complex games we’ve ever devised, driving cars more safely and efficiently than human drivers, and augmenting healthcare systems to detect disease and treat people more accurately.
The ultimate goal of all this is to create a self-conscious machine. It’s the subject of Alan Turing’s eponymous test, which asks whether a computer can convincingly pass for a human in conversation.
It’s the focus of Ray Kurzweil’s aggressive but increasingly plausible prediction of a human-AI singularity in less than thirty years. It’s also the subject of thousands of movies, novels, and breathless essays about the risk self-conscious AI poses to humanity.
In reality, what makes humans truly unique is the way in which our brain tells stories about the observations it makes, interpreting the world around us - often incorrectly.
Machines don’t need this kind of storytelling to be truly conscious. But the power to observe and to evaluate the impact of our decisions, not only on ourselves but on our surroundings, is inherently human. It’s something machines will likely never replicate, and it’s why human intelligence is so important to the AI equation.
So if machines won’t observe the world as we do, how exactly do they “think” and what can we do to influence those processes?
Machine learning has been the center of activity for the AI industry over the last half-decade, driving a massive boom in production and value across all industries. Machine Learning (ML) involves giving an algorithm the tools it needs to improve its own performance from data, without explicit human programming.
Powered by Artificial Neural Networks (ANNs), machine learning has evolved rapidly to emulate how humans look for and evaluate patterns in the world around us. This has enabled computers to recognize human faces, respond to vocal cues, and compete with humans in highly complex activities.
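To make that idea concrete, here is a minimal sketch, assuming nothing beyond Python and NumPy, of the simplest possible artificial “neuron” learning a rule from labeled examples instead of being programmed with one. It is purely illustrative; real systems use vastly larger networks and datasets.

```python
# A single artificial neuron trained with the classic perceptron rule.
# The point: the weights are *learned* from examples, not hand-coded.
import numpy as np

# Training examples: inputs and labels for a simple OR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])

rng = np.random.default_rng(0)
weights = rng.normal(size=2)  # start from random guesses
bias = 0.0
learning_rate = 0.1

def predict(x):
    """Fire (output 1) if the weighted sum of inputs crosses a threshold."""
    return int(weights @ x + bias > 0)

# Learning loop: nudge the weights whenever a prediction is wrong.
for epoch in range(20):
    for x, target in zip(X, y):
        error = target - predict(x)
        weights += learning_rate * error * x
        bias += learning_rate * error

print([predict(x) for x in X])  # converges to [0, 1, 1, 1]
```

The same nudge-the-weights idea, scaled up to millions of artificial neurons and driven by calculus rather than a simple threshold rule, is what powers the deep learning systems described next.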
Commercially, deep learning based on ANNs burst into the mainstream in 2016, when DeepMind’s AlphaGo beat Go champion Lee Sedol - a feat AI experts had predicted was still years away.
Four years later, deep learning is being used to improve processes in millions of businesses and computer systems. Yet researchers in the field are skeptical that deep learning can ever truly reach the level of human intelligence. Deep learning systems lack transparency in their decision-making, and it remains unclear how well any one system can transfer what it has learned to new tasks (though much is being done to address both problems).
Another major issue lies with the developers and creators behind any individual AI system. Even deep learning systems (often especially deep learning systems) carry inherent bias. Amazon discontinued a hiring algorithm that prioritized phrases and language more often found in men’s resumes. MIT researchers discovered that facial recognition algorithms were often severely undertrained at recognizing minorities, and minority women in particular. Because human operators and developers choose the data fed to the algorithms they design, their blind spots can become the system’s blind spots.
A 2018 study of Silicon Valley companies found that ten large companies in the area didn’t employ a single black woman in 2016, and three had no black employees at all. That lack of diversity can have a direct impact on the data fed into these systems.
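The mechanism is easy to demonstrate. Below is a minimal sketch, using synthetic data and a toy scikit-learn classifier (the groups, shifts, and sample sizes are all invented for illustration), of how a training set dominated by one group yields a model that performs noticeably worse on another:

```python
# How an unrepresentative training set biases a model: train almost
# entirely on group A, then measure accuracy separately per group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, shift):
    """Two classes of points; the group-specific shift makes each
    group's examples look slightly different from the other's."""
    X0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 2))
    X1 = rng.normal(loc=2.0 + shift, scale=1.0, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

# Group A dominates the training data; group B is barely represented.
Xa, ya = make_group(1000, shift=0.0)
Xb, yb = make_group(20, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on fresh, equally sized samples from each group.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(500, shift)
    print(name, "accuracy:", model.score(X_test, y_test))
# Group B's accuracy comes out noticeably lower: the model never
# really saw what its examples look like.
```

The model isn’t malicious; it simply never saw enough examples of the underrepresented group to learn what they look like - the same kind of failure mode the MIT researchers documented in commercial facial recognition systems.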
AI is a permanent part of society. It is too efficient, and has already had too great an impact, for that to change. Understandably, concerns over how data is collected and the bias of these systems remain. But arguably the biggest concern people have is what happens to the jobs these algorithms make more efficient, effectively reducing the demand for workers.
And while AI will supplant some jobs that can be fully replaced by automated systems - data entry, tracking, and many customer service jobs, for example - it creates just as many jobs and amplifies millions more. For AI to work, it needs human intelligence. Instead of busywork, labor is being redirected to more productive roles, often ones that support or work in tandem with artificial intelligence.
The more advanced technology becomes, the more people are needed to produce and manage it. Akin to the industrial revolution, which supplanted certain types of jobs but created far more, AI is a job engine that will only work in tandem with human input to capture data, manage that data, feed the algorithms that operate our systems, and more.
We live in a time of change. AI systems are advancing at a rapid pace, and the teams of people working on them are growing exponentially larger. While changes are needed to ensure the technology remains ethical in application, the opportunities it presents are massive, and we’ll only get there with the application of a particular brand of human intelligence.