The Third Industrial Revolution, called the Digital Revolution, was characterized by the invention of the Internet and the World Wide Web.
The Fourth Industrial Revolution (the term was coined around 2015-2016) is a fusion of artificial intelligence (AI), advanced robotics, gene editing, 3D printing, and other technologies that blur the lines between the physical, biological, and digital worlds.
What will the Fourth Industrial Revolution look like? Profoundly exciting and highly problematic. New vistas will open where incredible breakthroughs can happen, because AI holds enormous potential.
On the other hand, robots rather than humans will perform a high percentage of jobs, so unemployment rates and social inequality will soar, becoming significant catalysts for unrest across the world.
There is a term to describe the uncomfortable thoughts about our jobs becoming automated: automation anxiety.
As of October 2021, there is no Wikipedia page to define this term. Yet.
But how did AI manage to unleash an industrial revolution? To answer this question, we should go back in time.
Once upon a time, AI researchers believed that the only method to automate a task was to break it down into instructions that a machine could follow.
For example, a recipe on the back of a box has easy steps to follow, with no room for second-guessing. This kind of knowledge is called explicit knowledge, as we can clearly articulate the rules of the process. Researchers believed that using explicit knowledge was the way to teach AI about human tasks.
For a while, this viewpoint seemed to work. Any job that was rule-based or process-based became automated: ATMs, automatic car park payments, self-checkout machines, vending machines, etc.
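To make the idea concrete, here is a minimal sketch of explicit knowledge in code: a toy self-checkout total where every rule is spelled out in advance. The prices and the discount rule are invented for illustration, not taken from any real system.

```python
# Hypothetical illustration: a self-checkout total is pure explicit knowledge.
# Every rule (prices, discounts) is spelled out in advance; the machine never guesses.

PRICES = {"apple": 0.50, "bread": 2.20, "milk": 1.10}

def checkout_total(items):
    """Sum the price of each scanned item, applying a fixed 10% discount over 20."""
    total = sum(PRICES[item] for item in items)
    if total > 20:
        total *= 0.9  # explicit, pre-programmed discount rule
    return round(total, 2)

print(checkout_total(["apple", "bread", "milk"]))  # → 3.8
```

Because every rule can be written down, a machine can follow them perfectly; this is exactly the kind of task that was automated first.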
Alongside explicit knowledge, we have tacit knowledge: knowledge that we can’t express effortlessly through words. For example, have you ever tried reproducing a recipe written by your mother or grandmother? There is always a certain je ne sais quoi: “put flour as much as needed”, “mix until it feels right”, “cook until it is done”, or “cook until it smells right”.
Polanyi’s paradox describes tacit knowledge as knowledge beyond our ability to articulate: “we can know more than we can tell”. This paradox states that we understand certain non-routine tasks intuitively, but we can’t easily extract the rules to describe those tasks.
Because of tacit knowledge, economist David Autor wrote in a research paper:
At a practical level, Polanyi’s paradox means that many familiar tasks, ranging from the quotidian to the sublime, cannot currently be computerized because we don’t know ‘the rules’.
Thus, it was believed that programs could automate only routine jobs, and jobs applying tacit knowledge couldn’t yet become automated.
Two years after Autor wrote those words, in 2016, the AlphaGo program developed by Google’s DeepMind division defeated one of the world’s top Go players, four games to one. Why is this AI achievement extraordinary? Because the number of configurations on the Go board is astronomically large: “there are more possible Go positions than there are atoms in the universe”.
The brute-force approach normally used in chess engines is inefficient in Go, as brute force is no match for such a vast search space. DeepMind’s victory over a human meant that AI scientists had started applying other strategies than the old ways of using explicit knowledge.
In 2019, DeepMind released MuZero, a computer program that learns to play Go, chess, shogi, and Atari games without needing to know the rules beforehand.
The big step forward with MuZero is we don’t tell it the dynamics of the environment; it has to figure that out for itself in a way that still lets it plan ahead and figure out what’s going to be the most effective strategy. We want to have algorithms that work in the real world, and the real world is complicated and messy and unknown. So you can’t just look ahead, like in a game of chess. You, you have to learn how the world works.
David Silver, the leader of the reinforcement learning research group at DeepMind, in a Wired interview
The current popularity of Machine Learning (ML) and Deep Learning (DL), both AI fields, is based on the premise that it’s no longer mandatory to know the rules of a system’s functionality. Practically, this means that AI has overcome Polanyi’s paradox. There is no more dependence on strict, explicit algorithmic rules, as machines can learn tacit rules from enormous data sets instead.
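A toy sketch of what "learning without explicit rules" means in practice: a tiny 1-nearest-neighbour classifier. We never write the rule "a large, sweet fruit is a melon"; the program infers labels purely from labelled examples. The data and labels are invented for illustration.

```python
# A minimal sketch of learning without explicit rules: a 1-nearest-neighbour
# classifier. The rule is never written down; it emerges from labelled examples.

def nearest_neighbour(examples, query):
    """Return the label of the training example closest to the query point."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(examples, key=lambda ex: distance(ex[0], query))
    return label

# (weight_kg, sweetness 0-10) -> fruit label: tacit knowledge captured as data
training = [((0.2, 4), "apple"), ((0.15, 3), "apple"),
            ((2.5, 8), "melon"), ((3.0, 9), "melon")]

print(nearest_neighbour(training, (2.8, 7)))  # → melon
```

Real ML systems are vastly more sophisticated, but the principle is the same: the "rules" live implicitly in the data, not in the code.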
This short history of AI brings us back to the perils AI poses. Historian Yuval Harari predicts that, because of AI and automation, we will see the rise of a new social class, unemployed and unemployable: “the useless class”.
Just as the Industrial Revolution created the working class, automation could create a global useless class, and the political and social history of the coming decades will revolve around the hopes and fears of this new class. Disruptive technologies, which have helped bring enormous progress, could be disastrous if they get out of hand.
As hideous and shocking as the useless label is, there is no denying that billions of people all over the world will not have the luck, means, opportunities, or stamina to keep retraining and hold non-automated jobs. Social inequity and turmoil will only rise if billions of people can’t afford basic education, food, healthcare, housing, or safety. So, are there any answers to such grim predictions? Can our society future-proof itself?
An initiative to combat these predictions is a government program called Universal Basic Income (UBI), where every adult citizen receives a fixed amount of money regularly and unconditionally. UBI is not a new concept, as proto-UBI ideas started hundreds of years ago.
The UBI movement has gained traction because, as researcher Frank Kamanga remarked,
poverty by its nature is inhumane, it steals away dignity, and it denies people opportunities. A radical approach to do away with poverty at the global scale is implementing universal basic income.
There are many questions around a fair implementation of this scheme. How should the money for UBI be raised? Should corporations pay UBI taxes? How can the money be shared equitably across countries? If a corporation employs workers in Bangladesh but pays taxes in Ireland, how much of that corporation’s taxes should go to the Bangladeshi government for UBI, and how much should remain in Ireland?
Some voices say that UBI should have strings attached, like graduating from high school or learning new skills. Arguably, these voices are concerned that there might be people who will abuse the system at the expense of others.
Unfortunately, the problem with conditional UBI is that the countries that most need UBI might also have much higher levels of corruption than Western countries. At the micro level, families might then be excluded from receiving money unless they bribe minor officials to make it look like family members passed the UBI conditions.
If unconditional UBI is implemented instead, payments to families could become automated, with no human input. The African fintech startups that allow millions of Africans to send or receive money without a traditional bank account show that it is possible to set up a transparent and automated UBI payment infrastructure.
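An unconditional payout loop is trivially simple to automate, which is part of its appeal. The sketch below is entirely hypothetical: `MobileMoneyAPI` and its `transfer` method are invented stand-ins; a real system would plug in an actual payment provider's API.

```python
# A hypothetical sketch of an unconditional, automated UBI payout loop.
# MobileMoneyAPI is invented for illustration; it records transfers in a
# ledger instead of sending real money.

UBI_AMOUNT = 100  # fixed monthly amount, in local currency

class MobileMoneyAPI:
    """Stand-in for a mobile-money provider."""
    def __init__(self):
        self.ledger = []
    def transfer(self, wallet_id, amount):
        self.ledger.append((wallet_id, amount))
        return True

def run_monthly_payout(api, registered_wallets):
    """Pay every registered wallet the same amount: no conditions, no human input."""
    return sum(api.transfer(w, UBI_AMOUNT) for w in registered_wallets)

api = MobileMoneyAPI()
paid = run_monthly_payout(api, ["wallet-001", "wallet-002", "wallet-003"])
print(paid)  # → 3
```

Note what is absent: there is no eligibility check anywhere in the loop, which is precisely why unconditional payments leave no room for the petty corruption that conditions invite.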
More and more countries have started to adopt UBI strategies. For example, Spain introduced a minimum basic income to counter the poverty spikes caused by the COVID-19 pandemic. In August 2020, a German non-profit organization created a project that gives 1,200 Euros monthly to citizens who apply online and are chosen by lottery. The project will last three years, and researchers will compare results against over one thousand people who will not receive basic income.
One way to obtain money for UBI is to implement a robot tax, where companies replacing people with robots should pay a tax. As Bill Gates remarked, another benefit of a robot tax is to slow down the automation trend so that society can cope with the new reality.
The downside of this plan is that robotics companies feel they will suffer innovation penalties. There is also the risk that robotics companies will move their facilities to more tax-favourable countries if a country decides to implement a robot tax.
According to the World Economic Forum, employers will prioritize skills over traditional academic achievements:
Clearly, the future of work will not be about college degrees; it will be about job skills.
For a long time, becoming an employee was primarily linear: get an education, get a job, retire. However, following only this pattern will not allow us to remain relevant in the current workforce culture. We will need to reinvent ourselves periodically: get an education, get a job, upskill, get a job, upskill, get a job, etc.
By 2022 at least 54% of all employees will need reskilling and upskilling to respond to changing work requirements. Young people need the skills to rapidly learn, adapt, practice resiliency and take advantage of entrepreneurial mindsets to respond to this reality with the ingenuity to earn an income.
Also, the future of jobs will not only be about technical skills. Companies will want people with skills that are not easily automated, such as a collaborative mindset, creativity, critical thinking, empathy, and problem-solving. These are called soft skills, a term coined by the US Army in the 1960s. According to Wikipedia, soft skills refer to
any skill that does not employ the use of machinery. The military realized that many important activities were included within this category, and in fact, the social skills necessary to lead groups, motivate soldiers, and win wars were encompassed by skills they had not yet catalogued or fully studied.
[Image: Top 10 skills of 2020. Image Credit: World Economic Forum]
The good thing about soft skills is that they are skills. And so, we can train ourselves to use them.
Looking at the top 10 skills for 2020, we notice that critical thinking is now the second most sought-after skill. Why is that? As Satya Nadella, the Microsoft CEO, writes in his book Hit Refresh, this might be because even if we accept computer-generated medical diagnoses or legal decisions, we still expect humans to be accountable.
Critical thinking is also mandatory to fight biased AI. As I wrote in my article The Future of Education, there is the dreadful danger that companies produce algorithms with the companies’ biases embedded into code: bias against women in Amazon’s AI recruiting tool, discrimination based on gender and race stereotypes in Facebook’s ad delivery, Google’s terrible facial recognition disaster, Google’s unprofessional hair search result algorithm, racial bias in healthcare algorithms, etc.
How can we avoid biased AI? One possibility is for companies to become more diverse and inclusive. One of the best definitions of diversity and inclusion is that diversity is inviting everyone to a ball, while inclusion is inviting those people to dance.
For example, gender diversity in AI fields is minimal. According to the Alan Turing Institute,
women make up an estimated 26% of workers in data and AI roles globally, which drops to only 22% in the UK. Further, in the UK, the share of women in engineering and cloud computing is a mere 14% and 9% respectively.
We can argue that if women were good enough, they would have no problem getting a job in AI research. Surely. But look at the blind auditions that orchestras use to interview candidates. Until the 1970s, the top five orchestras in the US had fewer than five per cent women musicians. Then, orchestras started to use blind auditions.
After candidates were asked to remove their shoes (women usually wear heels) and were placed behind a screen so that the jury couldn’t deduce their gender, the percentage of women in orchestras increased significantly, to over 30%. Yes, but this happened decades ago. Surely we are savvier now about gender discrimination in employment.
In 2018, Japan’s medical schools admitted they had tampered with entrance exam scores for women for more than a decade so that more men would become doctors. Apparently, school directors believed women might shorten or stop their medical careers altogether after having children.
This practice didn’t happen in just one medical school but was widespread across at least nine of them. It is not known how many female applicants were affected, but the practice started as early as 2006.
How are these stories related to AI research and women? Because they share a biased mentality about women's capabilities and aspirations (not every woman is defined by motherhood). According to the Alan Turing Institute:
Sexism, bullying, and sexual harassment are clear contributors to the high attrition rates of women in data science and AI professions. The gender pay gap, slow career progression for women, male-dominated office culture, lack of access to mentors, and gender bias in hiring are also discouraging women from continuing their careers in data science and AI. Brave, powerful women have been sharing their individual stories of workplace abuse and sexual discrimination for years.
Also, let’s not forget about other social categories such as BIPOC (Black, Indigenous and People of Color), Latinx, LGBT+, etc. So, unless we are ready to admit our gender, race, and cultural biases, AI is left in the hands of people coming from WEIRD (western, educated, industrialized, rich, democratic) countries.
Biases start in boardroom meetings and end up on the Internet streets.
Talk to any preschool teacher, and they will tell you that taking care of 10-15 preschoolers is like herding cats. Running, screaming, squealing, giggling, crying, hitting, eating, sleeping, sharing, unsharing: a preschool room is always full of life.
“Teacher, his apple is shinier than mine! Teacher, she took my book! Teacher, I need help with the bathroom! Teacher, when is my mummy coming?” In the span of a minute, a good teacher will know to use a harsher voice with somebody and a softer tone for somebody else.
Especially with preschoolers, a teacher caressing children’s faces when they cry will do wonders for their emotional well-being. After all, psychologist Harry Harlow’s tormenting experiments showed that baby monkeys would rather go hungry than be deprived of the warmth of what they perceived as a caregiver.
Can AI automate teaching, with all its intricacies? Possibly. But the realities of online teaching during the COVID-19 pandemic showed that we are not even remotely close to automating education. I believe that preschool, primary, and secondary teaching will be some of the last jobs to be automated, if ever.
This is partly because these jobs are severely underpaid, so there is almost no financial incentive to automate them. Which brings us to the fact that education jobs might be more secure than white-collar jobs in the future.
At the end of their study The Future of Employment: How Susceptible Are Jobs to Computerization? (page 56), researchers Carl Benedikt Frey and Michael Osborne present a table of careers and their likelihood of becoming automated.
According to this study, data entry keyers, tax preparers, telemarketers, and watch repairers have the highest probability (0.99) that their jobs will be completely automated.
Archaeologists and anthropologists, audiologists, dentists, foresters, mental health and substance abuse workers, registered nurses, teachers, and therapists are predicted to have the lowest probability of being automated (<0.01). Of course, AI technologies will complement these low-risk jobs, but completely automate them? We don’t know yet.
As the study was published in 2013, which in computing terms means decades ago, the researchers might be wrong in some of their predictions. Still, the study’s table is worth a look.
We can’t possibly keep up with everything going on, as we also must protect ourselves from anxiety-inducing news. But in the face of change, do we have the luxury of postponing questions about the future of our jobs? And when we do ask these questions, do we have the luxury of despair, believing that everything will be doom and gloom?
We have too much to lose if technological progress is made at the expense of the vulnerable. And so, we must educate ourselves in AI matters, as there are only two categories of people: those who already feel AI’s impact, and those who don’t yet know how or when AI will come to their doors.
Lastly, let’s not forget that hope, like fear, can be small, but it can also cast a large shadow.
May you live in interesting times.
Previously published here.