Fair AI needs a fair society. How do we deliver?

Written by azeem | Published 2017/10/30
Tech Story Tags: artificial-intelligence | technology | business | mustafa-suleyman | deepmind


We spend a lot of time at Exponential View talking about the societal impact of AI. This week, I asked Mustafa Suleyman, the co-founder of DeepMind, to put together an Exponential View with the things he is thinking about.

The following is an excerpt from Exponential View #137. Full issue is here.

I’m Mustafa (@mustafasuleymn), and I’ve spent most of my life working on ways to tackle complex social problems at scale. While the world continues to improve in many respects, we remain overwhelmed by injustice and suffering on a level that can often feel impossible to address.

Even if the history of technology transforming society for the better is a familiar narrative, we can’t assume beneficial outcomes will emerge by default. In fact, as technology advances, ethical challenges will continue to grow. I do, however, believe that AI will play a crucial role in helping people unearth new strategies to solve complex social problems and make more efficient use of our natural resources.

This positive future won’t happen simply because AI systems magically discover how to do the right thing. It will happen because we, as a society, collectively figure out how to justly use, govern and control these systems. It also means finding ways to widely and fairly distribute their benefits. For me, an integral part of ethical AI development is provoking wider conversations about fairness, justice and the kind of world we want to live in, and feeding those insights back into these technologies.

This is going to be really hard. It means bridging gaps between technologists, companies, government, civil society and the wider public — gaps that can often seem painfully large — to make sure we shape these tools together and put social impact and ethics at the heart of everything we do.

Changing incentives and doing better

As a society, we’re struggling with a daunting set of problems, from neglected infectious diseases affecting a billion people to raging inequality and frighteningly unsustainable consumption. Technology can help us understand and tackle these challenges, but there is less faith than ever in this once-feted industry.

This TechCrunch piece sums up the commentary about the tech industry over the last few months, arguing that Silicon Valley has gone from “hero to villain”.

The industry is finally on a hard but essential journey towards greater diversity and representation. As the flurry of #metoo turns into a blizzard, and anger rightly turns on #himtoo, it’s clear that this journey is far from over. From images of hard-drinking brogrammers calling the shots, to the exposure of the misogynist culture of some Valley VCs, to the underrepresentation of women and minorities (see their stories and the Kapor Center’s study on how it affects career choices), we must all consider what changes are necessary to create the safety and room required for diverse voices not just to be heard, but to play an equal role in shaping our societies at every level.

We also need to rethink the standard metrics our industry uses to measure progress. Investment round valuations (which Stanford researchers find are severely inflated), “active users” (read Paul Lewis’s excellent summary of the attention/distraction debate) and revenues are the crudest of proxies for company success, and largely ignore externalities, longer-term consequences, diversity and wider social purpose, to name a few. This FT view proposed that businesses should “fix the failures of capitalism” by coming up with “concrete proposals on how to change executives’ incentives”; even the head of the CBI has blamed those failures on “a fixation on shareholder value at the expense of purpose”.

Tech isn’t value-neutral, and technologists need to factor in the downstream effects of their efforts, as this excellent AI Now report argues. Well-intended actions can still cause unintended harms: where might classroom technologies that measure students’ engagement levels by monitoring their brainwaves take us? A narrow field of vision can also lead the market to prioritise ‘first world’ problems (rentable ‘manservants’ and personalised soda drinks) over real-world ones.

If you enjoyed reading this excerpt from Exponential View, you can subscribe below.

Published by HackerNoon on 2017/10/30