The bot they told you not to worry about

Written by eripsa | Published 2017/07/27
Tech Story Tags: artificial-intelligence | safety | elon-musk | mark-zuckerberg | ethics


Early last week, Elon Musk reiterated claims that “AI is a fundamental risk to humanity”, and for the first time Mark Zuckerberg weighed in, calling Musk’s worries “irresponsible”. Several prominent AI researchers immediately sprang to Zuckerberg’s defense on social media, especially Yann LeCun, Andrew Ng, and later, Rodney Brooks.

To my knowledge this has been the biggest open dispute within the AI community during this most recent boom (starting ~2010). It is shaking up the established consensus and forcing people to choose sides along the newly developing fault lines. Despite all the hype, the AI community is still fairly small, so this dust-up has been a Big Deal.

Ian Bogost writes in The Atlantic:

When figures like Musk and Zuckerberg talk about artificial intelligence, they aren’t really talking about AI — not as in the software and hardware and robots that might produce delight or horror when implemented. Instead they are talking about words, and ideas. They are framing their individual and corporate hopes, dreams, and strategies. And given Musk and Zuck’s personal connection to the companies they run, and thereby those companies’ fates, they use that reasoning to help lay the groundwork for future support among investors, policymakers, and the general public.

On this front, it’s hard not to root for Musk’s materialism. In an age when almost everything has become intangible, delivered as electrons and consumed via flat screens, launching rockets and digging tunnels and colonizing planets and harnessing the energy of the sun feel like welcome relief. But the fact that AI itself is an idea more than it is a set of apparatuses suggests that Zuckerberg might have the upper hand. Even if it might eventually become necessary to bend the physical world to make human life continuously viable, the belief in that value starts as a concept, not a machine.

Bogost is correct, of course, that this nerd fight is all marketing and politicking. But his analysis stays firmly in the realm of these billionaires’ ideas, and seems too enthralled with the utopianism on both fronts to really consider how this fight shakes out for us peons on the ground.

My own streams are filled with AI aficionados of various stripes making these developing alliances explicit and feeling out where the new consensus lies. Anecdotal evidence reveals the following superficial pattern:

  • Nearly everyone associated with the more New Age-y or eschatological wings of the Singularity movement (especially LessWrong) is aligning with Musk.
  • Save a few high-profile exceptions (especially Stuart Russell), nearly everyone tied directly to Corporate Tech Stacks (Google, FB, Baidu, etc.) is aligning with Zuckerberg.

For the audience, the cumulative effect is to make the Muskovites appear like unhinged cult conspiracy theorists, and the Zuckers to project an image of Responsible Corporate Professionalism(TM). In this budding narrative you can already hear echoes of the broader fight between isolationist libertarianism and neoliberal universalism that is shaping politics across the planet. In some ways this fight in AI is a microcosm of the Trump vs Clinton nightmare we’re somehow still trapped inside. From this perspective, Bogost’s suggestion that Zuckerberg “might have the upper hand” does not provide much comfort.

Below, I’ll discuss how this battle for ideas intersects with the broader sociopolitical and economic landscape, and what it means for the future of AI.

It’s 2017. The future is bleak and there’s a lot to discuss.

The most convincing argument for AI risk is an analogy to the Sorcerer's Apprentice. No, seriously.

The Muskovite isolationist wing of the AI community is composed of the Singularitarians, the Transhumanists, the LessWrongers, the Sea-Steaders, the Blockchainers, and the hopeful Martians. Beyond the existential threat of AI, these groups are terrified of our planet and the future, they’re distrustful of both humanity and State Capitalism, and they’re desperately looking for ways to get off the grid and protect what they have. This cluster of concerns makes them uneasy allies with the states’-rights, alt-right, gun-rights, red-pill, tea-party, wall-building truthers who currently occupy formal positions of power in the United States government.

Musk’s technovision is more directly informed by science fiction and accelerationist capitalism than by the overt racism of the political right, but the reactionary lessons he draws yield many of the same practical consequences. It’s no mystery why Musk played ball with the president for so long: he saw in Trump’s institutional pillaging opportunities for developing his own grand vision. Musk finally jumped ship over climate change. Not out of concern for environmental justice or ecological integrity, but because Trump denies the critical context in which Musk’s sprawling portfolio coheres.

For what it’s worth, the discussion of existential risk and “Friendly AI” largely arose outside the academic journals and in the LessWrong forums, an online community with historical ties to the alt-right. While they now distance themselves from that past, the strategy of overwhelming the media until everyone’s talking about what you think they should be talking about is an uncomfortably familiar play from their cousin’s book. The so-called “alignment” arguments rose to prominence largely through the sincere engagement of Nick Bostrom, the philosopher at Oxford responsible for that other perniciously headline-grabbing thought experiment of our day, the Simulation Hypothesis. Thanks to the hard work of Bostrom’s institute, social media headlines have been successfully screaming about living in simulations and robots destroying humanity for years.

In Zuckerberg’s vision for America, not just Pepsi but all soft-drink brands can be used by minor celebrities to stop police violence.

In contrast, the Zucker universalist wing is attracted by Mark’s Pepsi-commercial talk of “building global community” in the emphatic singular. Facebook’s homogeneous, centralized, corporatized, police state “community” fits snugly within the neoliberal hegemony that continues to characterize the politics of the mainstream left and the tech giants who woo them. Those tech giants have invested heavily in AI over the last few years, and persistent public fears over the risks associated with AI remain a major threat to these investments.

For instance, when Google announced it was selling off its robotics company Boston Dynamics in mid-2016, the rumor was that Google feared public concerns about robots were undermining the safe, friendly corporate image it hoped to cultivate around its driverless cars.

“There’s excitement from the tech press, but we’re also starting to see some negative threads about it being terrifying, ready to take humans’ jobs,” wrote Courtney Hohne, director of communications for Google X.

Rather than address these prejudices against AI head-on, Google divested. The lesson is that hype around killer robots is bad for business and brand identity—unless, of course, your brand depends on escalating public fears and distrust in mainstream tech. In 2015, Musk donated $10 million to AI safety research, ensuring a steady stream of terrifying headlines in the press. A year later, Google put its wicked awesome robot company up for sale. These tensions around safety and hype have been building in the AI sector for a long time.

Unrestrained hype was arguably responsible for the first AI winter, and the new generation of AI boomers are eager to avoid repeating these mistakes. Zuckerberg’s rebuke of Musk represents not just Mark’s personal opinion (obviously informed by his expert advisers, especially LeCun), but also the opinions of Mainstream Corporate Tech, and their financial and political interests during this boom in AI. Zuckerberg’s cool, professional handling of a divisive issue spanning policy and industry felt not only like first-class brand management, but also like the orchestrated machinations of a political campaign. Zuckerberg was demonstrating for the world how he would effectively deal with the more fringe elements of his “base”. It was the kind of media spectacle that makes a Clinton donor breathe easy.

Like Trump vs Clinton, the Zuckerberg wing of this AI debate holds more money, expertise, political influence, media savvy, and a much stronger aura of responsible professionalism than their opponent, all of which is good reason to expect them to have the advantage in this debate. At the same time, we now live in Donald Trump’s America, so who the fuck knows anymore.

It’s worth pausing here to consider that the alliances and commitments described above have basically nothing to do with the nature of the risks posed by AI, which (as Bogost explained) is irrelevant to this discussion. Like Trump vs Clinton, the debate ultimately boils down to whether we trust the Corporate State to protect the social order. The Muskovites resolutely do not, and the Zuckers are a little hurt that they haven’t earned our trust already.

“But my circle is virtuous” — an AI Expert

This dichotomy was made most clear in the video above from vocal Zucker Andrew Ng, which was spread around last week at the height of the buzz. The clip starts mid-lecture, just as Ng is about to lay into the Musk hype machine. Ng writes on the board: “Non-virtuous circle of hype”, and draws a cycle running from “Evil AI hype” to “funding” for research into AI safety to “anti-evil AI” work, which in turn creates demand for more evil AI hype. Ng calls this a non-virtuous circle because it creates an incentive to overemphasize potential risks. Ng’s point is that the circle is in some sense self-fulfilling: the safety researchers are manufacturing the very conditions (risk hype) that make the research they’re doing (safety research) valuable.

Ng’s argument here doesn’t make much sense as a criticism of Musk, though, given that Musk is putting up his own money to fund the safety research. It is clearly not Musk’s devious plan to hype the risks of AI in order to receive grants for safety research. Perhaps this is Bostrom’s strategy, but as a criticism of Musk, Ng gets the causal arrow exactly backwards. Musk is funding those safety grants (going to Bostrom & co) in order to drive hype that fuels the rest of his portfolio. This isn’t some perpetual motion machine of hype; this is the much more banal work of corporate advertising masquerading as academic research. But I don’t think it was Ng’s intention to blow this particular whistle.

Ng’s argument in this clip is strange for another reason: it begs the central question at stake. If AI safety really were a legitimate issue, one would think a cycle driving funding to safety research would be virtuous, in that it addresses legitimate safety concerns. But Ng presumes from the outset that safety concerns are overblown, and so he locates part of the problem in the safety research itself. This is concerning. The problem with the hype is Musk’s exploitation of public fears to drive his other business ventures. It is not a correction to the hype to clamp down on safety research, but this is an immediate implication of Ng’s critique.

But the most telling part of the clip is what it leaves out: the start of the lecture, where Ng describes his preferred alternative. The clip starts with half the whiteboard already covered in the “Virtuous circle of AI”, which runs between “Products”, “Users”, and “Data”. This is essentially the business model for the Tech Stacks, and it generates not hype but corporate profit. At no point does his preferred circle pause for safety considerations. So Ng unwittingly lays out the corporate perspective on this debate: the strategy that generates user data and corporate profits is good, the strategy that generates public distrust and an emphasis on safety is bad. It’s worth noting here that Andrew Ng has not yet signed the FLI Open Letter on autonomous weapons.

Ng’s video clearly reveals the many-headed Corporate Scylla hiding just behind Zuckerberg’s rebuke. Musk’s hype-vortex Charybdis is not an attractive alternative. A world where these poles exhaust the narratives would be deeply unsatisfying.

Tag yourself, I’m the erosion.

Fortunately, the world is never so simple. These weary propaganda wars leave out the small cluster of academic ethicists, lawyers, and policy researchers working on AI who take both safety and responsibility seriously. I consider myself a peripheral member of this cluster, and in this fight we have found ourselves caught in the gilded crossfire.

For instance, consider my friends at the Campaign to Stop Killer Robots, a group that lobbies the UN and other agencies calling for international treaties restricting the development and use of autonomous weapons systems. People from the Campaign have received some of Musk’s FLI money for safety research, but this fight between Musk and Zuckerberg has forced @BanKillerRobots into the awkward position of clarifying that they’re concerned about a different kind of killer robot.

This scuffle between Musk and Zuckerberg doesn’t concern the killer robots that political activists have long been fighting against; the hard work of this dedicated community is simply overshadowed by the otherwise irrelevant bickering of titans.

The Campaign has been around for years, but as Bostrom’s work has gained popularity, a cottage industry of AI safety research has arisen to soak up the funding. These researchers have no particular allegiance with either Musk’s or Zuckerberg’s grand technovisions. They are simply scholars interested in the ethical, industrial, and policy dimensions of AI safety, and they are taking advantage of a plentiful season. Like most academics, AI safety researchers mostly find the media hype to be tedious and unnecessary. Nevertheless, the literature frames its discussion in terms amenable to the Bostrom/LessWrong approach. That includes talk of “ethical alignment”, Friendly AI, paperclip maximizers, and so on.

These researchers aren’t eager to tie themselves to the increasingly tarnished reputation of the Muskovite bandwagon. But they also aren’t ready to abandon the interests and theoretical commitments they’ve developed in the course of an academic career. These positions have come under attack from the Zuckerberg/Ng dismissal of AI safety, and the media frenzy makes it difficult to defend AI safety concerns without sounding like a defense of Musk’s sensationalism. Hence the minor panic and concern rippling through my networks.

From the public’s perspective, the key lessons reinforced by this minor media event are as follows:

  1. Musk’s views fall outside the corporate mainstream of AI research
  2. The corporate mainstream of AI research doesn’t care about safety

These lessons are overly simplistic, of course. But they aren’t exactly wrong, either.

From the perspective of AI research, I expect Bostrom/LessWrong-style arguments to fall further out of fashion, even while research into safety grows substantially. To me, both seem like positive developments in the field.

From a critical perspective, I expect to see battles over safety and corporate AI to continue long past these early, sensationalist concerns. The history of consumer advocacy tells us that safety only sells when consumers demand it, and that corporations will tend to resist these demands. We shouldn’t let accusations of sensationalism prevent us from being vocal about safety.

With this in mind, I thought I’d close by sharing some good, non-sensationalist policy work being done by safety researchers in my community. The videos below come from a conference last October at NYU on ethics in AI.

From the video above, I especially recommend the first two talks: Peter Asaro’s (1:30) on Autonomous Weapons Systems and Kate Devlin’s (33:20) on Sex Robots.

From the video above, I recommend Eric Schwitzgebel and Mara Garza’s talk (35:00) on AI rights, and John Basl and Ronald Sandler’s (1:06:00) talk on Oversight Committees for AI research. You can watch the full conference here, which gives a great window into the state of the field.

I’m also a big fan of work by researchers in Kyoto on the abuse of public service robots, and of researchers at Tufts teaching robots to say no to human commands. I’d also point to AI safety researchers and social scientists like Julie Carpenter, Roman V. Yampolskiy, Patrick Lin, David Gunkel, and Joanna Bryson as doing solid, professional work on AI ethics and safety. This work ought to be celebrated and developed outside the interests of the corporate hype train. I have criticisms to offer in this domain, of course, but this community is not “irresponsible” by any stretch of the imagination. These scholars are thinking much harder about our future and safety than either Musk or Zuckerberg, and we ought to allocate our attention accordingly.

