In Los Angeles, formerly quiet residential neighborhoods suddenly rattle under the din of continuous rush-hour traffic. Residents are furious, protesting that their home values have fallen and that their children can no longer safely play outdoors. Drivers, meanwhile, are relieved to shave ten minutes off commutes otherwise spent on the city’s infamously snarled freeways. The culprit, or savior, depending on whom you ask, is the intelligent routing algorithm of the smartphone app Waze. Informed by real-time traffic data, Waze sends drivers down unconventional routes to avoid congestion. As with many cases of applied artificial intelligence, the algorithm optimizes specific variables to greatly benefit certain people, while unforeseen costs are borne by others. Mitigating the adverse outcomes of intelligent systems, while expanding their benefits to more people, will be one of the great challenges of the coming years.
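Waze’s actual routing logic is proprietary, but the underlying idea can be illustrated with a classic shortest-path search over congestion-adjusted travel times. The sketch below is minimal and hypothetical: the street names, travel times, and congestion factors are invented, and Dijkstra’s algorithm stands in for whatever Waze really runs.

```python
import heapq

# A minimal sketch of congestion-aware routing. Waze's real algorithm is
# proprietary; Dijkstra's shortest-path search over travel times stands in
# for it here, and every street name, time, and congestion factor below is
# hypothetical.
def fastest_route(graph, congestion, start, goal):
    """Return (minutes, path) minimizing congestion-adjusted travel time."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        elapsed, node, path = heapq.heappop(queue)
        if node == goal:
            return elapsed, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, base_minutes in graph.get(node, []):
            # Live traffic scales each road's free-flow travel time.
            cost = base_minutes * congestion.get((node, nxt), 1.0)
            heapq.heappush(queue, (elapsed + cost, nxt, path + [nxt]))
    return float("inf"), []

graph = {
    "home": [("freeway_on", 2), ("elm_street", 3)],
    "freeway_on": [("downtown", 8)],
    "elm_street": [("oak_street", 4)],
    "oak_street": [("downtown", 5)],
}
# Free-flowing, the freeway wins: 10 minutes versus 12 on surface streets.
print(fastest_route(graph, {}, "home", "downtown"))
# At rush hour the freeway segment triples in cost, and the 'optimal'
# route now cuts through the residential Elm and Oak streets.
print(fastest_route(graph, {("freeway_on", "downtown"): 3.0}, "home", "downtown"))
```

A single multiplier on one edge is enough to flip the optimum onto neighborhood streets: the algorithm is doing exactly what it was asked to do, and the residents of those streets are simply not a variable it optimizes.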
Let us look back at a more extreme example of the power of technological advantage. During Western Europe’s Age of Discovery, royal cartographers synthesized dubiously collected data into the most advanced proprietary information systems of the day: maps. Teeming with information on areas rich in resources and ripe for exploitation (and sorely lacking detail in locations deemed to be of less potential profit), such maps were instrumental in securing the riches of the New World. The subjugation of its inhabitants to those who created and controlled these new systems of mass information came as a matter of course.
The risks we face today also stem from the unrestrained exercise of modern information sciences on an ill-prepared and unprotected society. The single greatest of these risks is that AI exacerbates and automates inequality in an already highly unequal and fragmented world. This outcome is not guaranteed: with proper foresight and thoughtful preemptive policy, the benefits of the impending era of pervasive AI can be more evenly dispersed, and these risks reduced. AI has the potential to be a uniquely powerful tool for solving many of the world’s problems: we can better identify and control disease, make transportation safer, and optimize the allocation of resources and benefits to those most in need. But left exclusively in the hands of those with the access and know-how to design and implement such systems, AI is ravenous for data, incomprehensible and unchallengeable to those most at risk from its processes, and single-mindedly focused on optimizing variables that reinforce the status quo.
Job automation is popularly recognized as one of AI’s biggest threats, and while it is not entirely clear when people will begin to lose their positions to machines en masse, the processes that will lead to that outcome have certainly begun. What is quite apparent is that not all jobs face the same risk of automation, and that the most vulnerable disproportionately fall under the umbrellas of administrative, ‘low-skilled’, or repetitive work. While a fair number of white-collar roles will be reduced (or at least have their responsibilities altered), a significantly greater share of blue-collar work is at risk. A decrease in available lower-income jobs is a substantial problem in its own right, but it will be accompanied by a vast increase in the wealth of companies providing AI services, along with their upper-class employees. More often than not, these will be the large technology firms that have already enriched themselves in previous cycles of the information age, using the heaps of user data they have amassed to train their nascent AI programs.
In the worst future, workers will be displaced by cutting-edge industrial AI and then have their unemployment benefits distributed by some horrendously designed government-services AI.
But artificial intelligence is likely to increase social inequality in other ways too. Cost-cutting governments often seek to allocate limited benefits more efficiently, and today the distribution of social services is already a task partly informed by algorithms. As AI services become cheaper and are marketed specifically for this work, the poor will more frequently find themselves at the sole mercy of these expanded systems, with little recourse to the active judgment of human caseworkers. The expansion of credit-scoring systems is also likely, particularly in countries where this has been made an explicit priority for AI development, such as China. Marginalized populations, lower-income workers, and others who do not fit neatly into an algorithm’s categorization logic stand to suffer from greater public and private reliance on these AI services, especially without forward-thinking legal protections.
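To make that failure mode concrete, consider a purely hypothetical benefits-eligibility scorer. Every rule, threshold, and field name below is invented for illustration, not drawn from any real system; the point is how an applicant who falls outside the anticipated categories scores zero, however real their need.

```python
# A purely hypothetical sketch of the rigid categorization described above.
# Every rule, threshold, and field name is invented for illustration; no
# real benefits system is being reproduced here.
def eligibility_score(applicant: dict) -> float:
    score = 0.0
    if applicant.get("annual_income", float("inf")) < 20_000:
        score += 0.5
    if applicant.get("employment") == "unemployed":
        score += 0.3
    if applicant.get("fixed_address"):
        score += 0.2  # quietly penalizes people without stable housing
    return score

# A gig worker with volatile income and no fixed address misses every
# category the rules anticipate, despite being genuinely in need, and
# there is no caseworker in the loop to notice.
gig_worker = {
    "annual_income": 21_500,   # irregular; last year happened to be 'good'
    "employment": "self-employed",
    "fixed_address": False,
}
print(eligibility_score(gig_worker))  # 0.0
```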
For these reasons, preemptive and specific policies on artificial intelligence practices are necessary. But where exactly to start? With anything as transformative and lucrative as the coming age of AI ubiquity, regulation must be carefully defined and thoughtfully applied in order to balance the interests of both AI developers and the citizens whose lives their work will impact. Only with such policies will the benefits of AI have even a chance at offsetting the myriad risks this technology poses to society. Without regulatory foresight, the most substantial gains from artificial intelligence will accrue to the firms that are already rich enough to carry out the massive investments in research and technology that underpin AI. As mentioned previously, this will only exacerbate technological and economic inequalities, which are already in a worrisome state.
The time to establish norms, rules, and specific goals for an equitable AI society is now, before the technology reaches its full potential. Promising steps in this direction have been taken by forward-thinking researchers and policymakers. However, investment in economic and legal planning has yet to match AI’s potential to effect immense changes across broad swathes of society. The richest and most powerful companies in both the United States and China, the world’s two largest economies, have staked their futures on commercializing artificial intelligence. It is therefore essential that democratic governments and international bodies keep pace, channeling development in ways that benefit citizens and corporations alike.
One of the most vexing, yet rewarding, elements of AI is the inherently ‘black box’ nature of its calculations. Machine learning algorithms are valuable and effective because they can act on patterns in data that human analysts could never identify; this is also what makes their outcomes difficult to predict and regulate, as the residents of Los Angeles’ formerly quiet neighborhood streets can attest. For this reason, the study of these potential emergent effects must become a central part of AI development, a task that regulators may need to mandate. This sort of ‘disparate impact testing’ ought to be fully integrated into firms’ standard processes, not only during initial development but also as part of recurring internal audits to ensure that systems continue to function as intended.
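What such a test might look like in practice: the sketch below borrows the ‘four-fifths rule’ used in US employment-discrimination analysis, flagging any group whose rate of favorable outcomes falls below 80% of the best-treated group’s. The field names and toy records are assumptions for illustration; a real audit would run against production decision logs.

```python
from collections import defaultdict

# A minimal sketch of disparate impact testing via the 'four-fifths rule':
# flag any group whose favorable-outcome rate is below 80% of the rate for
# the best-treated group. Field names and toy data are assumptions.
def impact_ratios(decisions, group_key="group", outcome_key="approved"):
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for record in decisions:
        totals[record[group_key]] += 1
        favorable[record[group_key]] += bool(record[outcome_key])
    rates = {g: favorable[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": True},  {"group": "A", "approved": False},
    {"group": "B", "approved": True},  {"group": "B", "approved": False},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]
for group, ratio in impact_ratios(decisions).items():
    status = "FLAG for review" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} ({status})")
# group A: impact ratio 1.00 (ok); group B: impact ratio 0.33 (FLAG for review)
```

Run as a recurring audit rather than a one-time check, a test like this catches the drift that sets in as live data shifts under a deployed model.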
Artificial intelligence services require massive amounts of data (referred to as ‘training sets’ in the parlance of machine learning) in order to identify minute patterns and continually refine their prediction logic. This is one reason that the largest technology firms have taken such a commanding lead in AI research and development over nimbler startups, which have not had the opportunity to collect and store years of data on billions of users. Personal data has recently become something of a lightning rod in public discussion, largely due to Facebook’s Cambridge Analytica scandal. As the value of user data to booming AI firms becomes clearer to those who inadvertently provide it, the issue will only grow more public. Put another way, the damage done by Facebook’s poor data stewardship yesterday and today could easily be dwarfed by the fallout from misused data in the age of AI.
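The dependence on sheer data volume is easy to demonstrate with a toy model. Everything in the sketch below is synthetic and invented for illustration, but it shows the general dynamic: a model trained on a small sample has almost no patterns to act on, and its accuracy climbs as the training set covers more of the space.

```python
import random

# Toy demonstration that prediction quality tracks training-set size.
# All data here is synthetic; the 'model' simply memorizes the most
# common label seen for each input value.
random.seed(0)

def true_label(x):
    # The hidden pattern the model must discover from examples.
    return x % 7 in (0, 2, 5)

def train(n_examples):
    counts = {}
    for _ in range(n_examples):
        x = random.randrange(1000)
        counts.setdefault(x, [0, 0])[true_label(x)] += 1
    # Predict the majority label per value; unseen values default to False.
    return {x: c[1] > c[0] for x, c in counts.items()}

def accuracy(model, trials=5000):
    hits = 0
    for _ in range(trials):
        x = random.randrange(1000)
        hits += model.get(x, False) == true_label(x)
    return hits / trials

for n in (100, 1_000, 100_000):
    print(f"{n:>7} training examples -> accuracy {accuracy(train(n)):.2f}")
# Accuracy rises toward 1.0 as the training set grows: whoever holds the
# most data gets the best predictions.
```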
This calls for a stronger, globally enforced set of rules regarding personal data rights. Europe’s new General Data Protection Regulation (GDPR) is a clear step toward this solution, but questions remain: does the EU have the capability to stringently police violators in fields as opaque as digital ad auctions? Will it close some of the law’s major loopholes, which allow firms to collect and process data based on the “legitimate interest” of the business without obtaining users’ consent? And will the United States, keen not to lose a perceived ‘AI race’ with China, ever impose similar regulation on its most competitive technology firms? To best protect citizens’ data from potentially catastrophic misuse in poorly designed AI systems, the answer to each of these questions must be yes.
If these regulatory goals can be achieved, the benefits of AI will be worth the risks. This is a tall order, requiring coordination and enforcement on a global scale among national governments, international bodies, and technology firms, whose interests often do not align. But if action is not taken soon, before the age of true AI ubiquity, we will lose the opportunity to ensure a more just and equitable digital society. History has shown time and again the dangers of immense technological imbalances at any scale, from the dehumanizing European conquest of the Americas to the more mundane risks of rerouted Los Angeles traffic. As we prepare for the extraordinary benefits artificial intelligence has to offer humanity, we must recognize that immensely consequential risks abound, and that today is our only chance to shape the AI future we will soon inhabit.