Long-term Trends in the Public Perception of Artificial Intelligence

by Ethan Fast, February 1st, 2017

Artificial intelligence has a long history of boom and bust cycles.

During A.I. booms, money flows through universities and industry labs, fueling promised advances that often sound like magic, if not a panacea. Extreme optimism was particularly common in the field’s early years. In 1960, for example, A.I. pioneer Herbert Simon suggested that “machines will be capable, within twenty years, of doing any work that a man can do,” a claim echoed in 1961 by Marvin Minsky, one of the field’s founders.

Despite the advances that have occurred since that time, most recently breakthroughs in neural networks (a form of machine learning inspired by the biological structure of the brain), today’s leading researchers tend to be more circumspect about the near-term potential of artificial intelligence. Yann LeCun, Facebook’s director of A.I. research, holds a view that is representative of most computer scientists working in the area:

We would be baffled if we could build machines that would have the intelligence of a mouse in the near future, but we are far even from that.

This perspective is not universal, however, especially among those outside the field. Elon Musk and Stephen Hawking, for example, have suggested that A.I. may soon be powerful enough to present an existential threat to humanity. Others, such as Ray Kurzweil, Google’s resident futurist, are bullish on the likelihood of a technological singularity, a period of (literally) unimaginable technological growth that would forever alter human society. There is some evidence that these more extreme points of view play an outsized role in the public imagination.

Regardless, A.I. researchers do believe the field has been undergoing another boom, and a recent study we’ve conducted provides some numbers that support this impression. In Long-term Trends in the Public Perception of Artificial Intelligence (a paper with Eric Horvitz to appear in AAAI 2017), we find that the percentage of articles covering A.I. in the news has increased dramatically in recent years (Figure 1). For example, more than four times as many New York Times articles discussed A.I. in 2016 as in 2009, as a percentage of the total number of articles published.

Figure 1: Coverage of artificial intelligence in the New York Times has exploded since late 2009. The y-axis shows the percentage of New York Times articles published in a given year that discuss A.I.

Why is it important to understand what people are saying about A.I.?

Public hopes and concerns can translate into regulatory activity with serious repercussions. For example, some have recently suggested that the government should regulate A.I. development to prevent existential threats. Others have argued that racial profiling is implicit in machine learning algorithms, in violation of current law. More broadly, if public expectations for A.I. diverge too far from what is technologically possible, we may court another A.I. winter, a period of decline resulting from the dashed hopes that often follow intense enthusiasm and high expectations.

To understand how public discussion of A.I. has evolved over time, we ran a study to analyze more than 30 years of news articles published in the New York Times. These news articles are particularly useful because they provide a signal of public opinion and engagement that extends far into the past.
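In spirit, this coverage measure reduces to simple counting: for each year, the fraction of all published articles that discuss A.I. The sketch below illustrates the computation in Python; it assumes articles are available as (year, text) pairs and substitutes naive keyword matching (with a purely illustrative term list) for the article-selection method actually used in the study.

```python
from collections import Counter

# Illustrative keyword list; the study's actual article-selection
# criteria are more careful than simple substring matching.
AI_TERMS = ("artificial intelligence", "machine learning", "neural network")

def is_ai_article(text):
    """Naive stand-in for deciding whether an article discusses A.I."""
    lowered = text.lower()
    return any(term in lowered for term in AI_TERMS)

def yearly_coverage(articles):
    """articles: iterable of (year, text) pairs.
    Returns {year: percentage of that year's articles discussing A.I.}."""
    total, ai = Counter(), Counter()
    for year, text in articles:
        total[year] += 1
        if is_ai_article(text):
            ai[year] += 1
    return {year: 100.0 * ai[year] / total[year] for year in sorted(total)}
```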

In addition to measuring levels of A.I. discussion, our study labeled news articles with ratings of optimism and pessimism, distinguishing between articles that suggest A.I. will help humanity (e.g., by providing better healthcare) and those that suggest A.I. will hurt humanity (e.g., by eliminating jobs). We found that despite shifts in opinion over individual topics — for example, an increasing concern about the negative impact of A.I. on work — overall levels of optimism and pessimism have remained more or less balanced over time (Figure 1).

To generate the data for these analyses, we hired crowdworkers (humans on the Mechanical Turk crowdsourcing platform) to read and annotate paragraphs across more than 3 million stories published by the New York Times between 1985 and 2016. These annotations then informed levels of pessimism and optimism within A.I.-related articles, as well as measures of the prevalence of other themes, such as the impact of A.I. on work or transportation, or the fear of losing control of A.I.
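Conceptually, turning those annotations into trend lines is a group-and-average step: for each year, compute the share of annotated paragraphs that express a given theme. Here is a minimal sketch with pandas, using hypothetical field names rather than the study’s actual annotation schema:

```python
import pandas as pd

# Hypothetical records: one row per crowdworker judgment of one paragraph.
# Field names are illustrative, not the study's actual schema.
annotations = pd.DataFrame([
    {"year": 2009, "theme": "loss of control", "present": 1},
    {"year": 2009, "theme": "impact on work",  "present": 0},
    {"year": 2016, "theme": "loss of control", "present": 1},
    {"year": 2016, "theme": "impact on work",  "present": 1},
])

# Share of annotated paragraphs expressing each theme, per year.
trends = (annotations
          .groupby(["year", "theme"])["present"]
          .mean()
          .unstack("theme"))
print(trends)
```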

The study tracked changes in sixteen of these themes over time (Figure 2), such as “artificial intelligence will have a negative impact on work” or “humans will lose control of artificial intelligence.”

Figure 2: Hopes and concerns from 1986 to 2016. In recent years, we see an increase in concern that humanity will lose control of A.I., and hope for the beneficial impact of A.I. on healthcare.

So, what trends do we find across these themes?

Perhaps our most surprising finding is that the fear of losing control of A.I. has become far more common in recent years: as a percentage of A.I. articles, it is more than triple what it was in the 1980s (Figure 2M). For example, Scientists Worry Machines May Outsmart Man addresses this issue in 2009:

A group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.

Ethical concerns about A.I. have also become more common (Figure 2L), driven in part by similar existential worries, but also by the more practical decisions that must be made by self-driving cars. For example, the article Artificial Intelligence as a Threat (2014) discusses such concerns, suggesting that machines without ethics may make poor decisions:

The first, more near-future fear, is that we are starting to create machines that can make decisions like humans, but these machines don’t have morality and likely never will.

These trends generally suggest an increasing public belief that researchers may soon be capable of building dangerous A.I. systems.

From a more optimistic standpoint, hopes that artificial intelligence can improve healthcare have also trended upwards (Figure 2G). For example, an article from 2003, Intel and Alzheimer’s Group Join Forces, reports:

For patients with more advanced cases, the researchers held out the possibility of systems that use artificial intelligence techniques to determine whether a person has remembered to drink fluids throughout the day.

Concerns about a lack of progress in A.I. have diminished in recent years, despite a recent uptick (Figure 2P). This concern reached its height in 1988, at the start of the most significant A.I. winter. An early example appears in A New Chief Executive Is Named at Symbolics, from May of that year:

The artificial intelligence industry in general has been going through a retrenchment, with setbacks stemming from its failure to live up to its promises of making machines that can recognize objects or reason like a human.

Intriguingly, many recent articles that do discuss a lack of progress draw reference to these failed promises of the past. Such references are a form of meta-discussion about the role that overly high expectations played in previous setbacks for the field.

Among the remaining trends, a positive view of the impact of A.I. on human work has become less common (Figure 2F), while a negative view has increased sharply (Figure 2E). Hopes in A.I. for education have grown (Figure 2D), as has a positive view of merging with A.I. (Figure 2I) and the prevalence of A.I. in books and movies (Figure 2N).

Is there anything actionable in these findings?

Some of the dissonance we see is troubling. Experts in artificial intelligence have become increasingly skeptical about the near-term potential for radical progress in the field, but public concerns about the risks of such progress have grown in recent years. This conflict between public perception and reality could, if uncorrected, eventually lead to damaging consequences (for example, through poorly considered regulations applied to A.I. research).

Other growing concerns, however, such as the negative impact of A.I. on work or military applications of A.I., are grounded in immediate questions that face our society. These concerns should perhaps more directly shape ongoing public policy with respect to A.I. applications.

Beyond such narrow questions, it is clear that we are living through an A.I. boom much larger than any that have occurred in the past. Under these conditions, it is more important than ever to monitor the field’s evolution and its potential impact on society. This goal is spearheaded in particular by the One Hundred Year Study on Artificial Intelligence (AI100), an organization founded to study and anticipate such societal effects, which released its first public report in 2016.

The effects of new technologies, however, are usually difficult to anticipate. Understanding what the public is thinking about a technology — regardless of whether that perspective is grounded in reality — can be a critical factor. Computational analyses, such as the approach we have presented here, provide a powerful way to measure such perspectives at scale.