The real danger of Artificial Intelligence is not what you think

by João Duarte, November 8th, 2017
From November 8th to 10th, I attended the Global Symposium: Artificial Intelligence & Inclusion. There were almost 200 participants, invited from all over the world.

I was particularly interested in two topics: (1) how to create AI solutions that respect societies’ cultural values and (2) how to use AI to increase inclusion in education.

Here are my five takeaways:

1. AI can cause harm (even if not intended to)

The rise of Artificial Intelligence brings a lot of benefits in areas such as education, health, media, government, and others. But it can also cause harm and discrimination.

This harmful behavior can arise from:¹

  • Using biased or poor-quality data to train models;
  • Having poorly defined rules;
  • Using it out of context;
  • Creating feedback loops.

Solid Gold Bomb was a T-shirt company that used AI to generate phrases to print on its products. One of the phrases its algorithm created was “Keep Calm and Rape a Lot.” This, of course, caused terrible publicity. Amazon revoked the company’s seller account, and it ended up closing its doors.


The algorithm wasn’t built to generate this kind of offensive content. But whoever developed it designed it to simply pick any verb from a list², and that list contained the word “rape.” They had bad data and poorly defined rules.
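To see how “bad data plus poorly defined rules” plays out, here is a minimal sketch of a template-plus-word-list slogan generator. It is not the company’s actual code (which was never published); the template, word lists, and blocklist are all invented for illustration.

```python
import itertools

# Hypothetical reconstruction: a slogan generator that fills a fixed template
# with words taken from a scraped list that nobody reviewed.
TEMPLATE = "KEEP CALM AND {verb} {obj}"

verbs = ["CARRY", "DRINK", "DANCE", "RAPE"]      # bad data: unreviewed scraped list
objects = ["ON", "TEA", "ALL NIGHT", "A LOT"]

# Poorly defined rules: the original pipeline had no filtering step at all.
unfiltered = [TEMPLATE.format(verb=v, obj=o)
              for v, o in itertools.product(verbs, objects)]

# A single blocklist check would have prevented the incident.
BLOCKLIST = {"RAPE", "KILL", "HIT"}
filtered = [s for s in unfiltered
            if not any(word in BLOCKLIST for word in s.split())]

print(len(unfiltered), "candidate slogans,", len(filtered), "after filtering")
```

The point is not the specific blocklist; it is that the original design simply never asked whether the generated output could be harmful.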

Nikon launched a feature that warns the photographer whenever someone blinked in the picture. Some people reported that it falsely detected blinks in pictures of Asian people³.


Although Nikon is a Japanese company, it seems to have used biased training data in its algorithm. If the algorithm was developed outside Asia but deployed worldwide, it also lacked regional context.

Do you remember the movie Minority Report? In this film, machines predicted where and when crime would happen so the police could anticipate it.

Scene from the movie Minority Report

This is happening right now. There is a company called PredPol, which uses AI to predict where crime is most likely to occur and help the police make better use of their patrol resources.

The data used by the company to train its model is from crimes recorded by the police. But not every felony has the same chance of being recorded by the police. Crimes committed in areas that are heavily patrolled by police are more likely to be recorded⁴.

This can create a feedback loop that pushes the police to keep patrolling the same areas they already patrol. The system uses biased data and creates feedback loops.
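A toy simulation can make the feedback loop concrete. The sketch below is my own illustration, not PredPol’s model: two districts have identical true crime rates, patrols go where recorded crime is highest, and crimes only enter the data where patrols already are. The district that starts with one extra record ends up with nearly all of them.

```python
import random

random.seed(42)

# Two districts with the SAME underlying crime rate -- all numbers are made up.
true_crime_rate = [0.5, 0.5]
recorded = [2, 1]           # a single extra historical record in district 0
patrols_per_day = 10

for day in range(365):
    # Naive policy: send every patrol to the district with more recorded crime.
    target = 0 if recorded[0] >= recorded[1] else 1
    for _ in range(patrols_per_day):
        # A crime only enters the dataset if a patrol is there to record it.
        if random.random() < true_crime_rate[target]:
            recorded[target] += 1

print(recorded)  # district 0 accumulates almost all records; district 1 never catches up
```

The model never “learns” anything wrong in a technical sense; it simply optimizes against data that its own deployment keeps distorting.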

So, yeah! Even if it’s not designed to harm, AI can create harmful or discriminatory situations.

2. AI can learn how to be fair (and might learn better than humans)

It’s not easy to define what is fair and what is not. It involves cultural aspects, value judgments, and different points of view.

But if there are cases we can agree upon, AI can learn from them.

AI works by learning how to use variables to optimize an objective function. We can improve AI fairness by translating fairness statements either into objective functions or into constraints on those variables.

If we intentionally teach fairness concepts to AI, it will apply them more consistently than humans, because it is much more efficient than us at following formal rules.
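To make “translating fairness statements into objective functions” concrete, here is a minimal sketch with entirely made-up data: it trains a tiny logistic-regression scorer and adds a penalty on the gap in average predicted score between two groups, a rough form of demographic parity. The penalty weight and variable names are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 3 features, plus a binary group label standing in
# for a demographic attribute. Everything here is synthetic.
X = rng.normal(size=(200, 3))
group = rng.integers(0, 2, size=200)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(3)
lam = 2.0   # weight of the fairness penalty (illustrative choice)
lr = 0.1

for step in range(500):
    p = sigmoid(X @ w)
    # Gradient of the ordinary logistic loss.
    grad = X.T @ (p - y) / len(y)
    # Fairness term: penalize the squared gap in mean predicted score
    # between the two groups (rough demographic parity).
    gap = p[group == 1].mean() - p[group == 0].mean()
    dgap = (X[group == 1] * (p * (1 - p))[group == 1, None]).mean(axis=0) \
         - (X[group == 0] * (p * (1 - p))[group == 0, None]).mean(axis=0)
    grad += lam * 2.0 * gap * dgap   # gradient of lam * gap**2
    w -= lr * grad

p = sigmoid(X @ w)
print("score gap between groups:", p[group == 1].mean() - p[group == 0].mean())
```

Which fairness statement to encode, and at what cost to accuracy, is exactly the value judgment discussed above; the machine only enforces it consistently once we write it down.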

For instance, the Brazilian NGO Desabafo Social created a campaign exposing how image search engines such as Shutterstock and Google discriminate by race. This is a powerful example of how bias and discrimination are invisibly replicated in the digital world.

If you search for “baby” on Shutterstock, you can expect to see only white babies.

This search engine doesn’t reflect our population. But it can be taught to do that.

If we set “reflecting population diversity” as one of its goals, it can use a massive amount of data to ensure its results are representative in terms of race, gender, and culture.

Machines can do that much more efficiently than humans.
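As a rough sketch of how “reflecting population diversity” could become an explicit goal, the code below greedily re-ranks search results so that each group’s share stays close to a target proportion at every cutoff. The attribute labels and target shares are hypothetical; a real system would need far more care about where those labels and targets come from.

```python
from collections import Counter

def rerank(results, get_group, targets):
    """Greedy re-ranking: at each position, take the highest-ranked remaining
    item from the group that is furthest below its target share."""
    remaining = list(results)
    output, counts = [], Counter()
    while remaining:
        k = len(output) + 1
        # Expected count for each group after k items, minus its actual count so far.
        deficit = {g: targets[g] * k - counts[g] for g in targets}
        for g in sorted(deficit, key=deficit.get, reverse=True):
            candidate = next((r for r in remaining if get_group(r) == g), None)
            if candidate is not None:
                break
        output.append(candidate)
        counts[get_group(candidate)] += 1
        remaining.remove(candidate)
    return output

# Hypothetical image results, already labelled with a demographic attribute.
results = [("img1", "A"), ("img2", "A"), ("img3", "A"),
           ("img4", "B"), ("img5", "A"), ("img6", "B")]
reranked = rerank(results, get_group=lambda r: r[1], targets={"A": 0.5, "B": 0.5})
print([name for name, _ in reranked])  # alternates groups while supply lasts
```

The mechanics are simple; the hard part is the human decision about which targets count as “representative.”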

So, AI is discriminatory because it learned that way. But it doesn’t need to be like that.

3. AI is forcing us to rethink ethical and moral principles

As we’ve seen before, AI tries to optimize variables to reach a specific goal.

The process of thinking about these goals is raising ethical and moral questions that have been around for a while but have now become relevant again.

Other questions are new and arise from the possibilities that AI brings to the table.

The first class of Harvard’s Justice course, by philosopher Michael Sandel, presents a moral dilemma called the trolley problem, first stated in 1905.

You are the driver of a runaway trolley which can only steer from one narrow track on to another; five men are tied to one track and one man to the other.⁵

If you do nothing, the trolley will kill five people. Yet, if you switch the track, it will kill only one.

This same kind of problem is being faced by autonomous vehicle engineers, who need to decide whether to prioritize the lives of people inside or outside the vehicle.

4. There is no such thing as AI neutrality

When we interact with AI, our life changes.

For instance, if we use Google Maps to find the shortest path to somewhere, we don’t have to ask people on the street anymore. This changes the relationship we have with a city when we’re traveling.

The relationship kids are developing with Siri and Alexa is a double-edged sword. They are becoming more curious since they can ask anything at any time. But parents are reporting that they are becoming less polite as well.⁶

(Illustration by Bee Johnson for The Washington Post)

You don’t have to use words like “please” and “thank you” with AI personal assistants, and this changes how children learn to interact with the world.

‘Algorithms aren’t neutral. If we change the algorithm, we change the reality’ — Desabafo Social

However, AI engineers can change this by programming assistants to reply “Hey, you have to use the magic word!” when rudely asked to do something.
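As a toy example of that kind of rule, here is a sketch of a politeness check an assistant could run before fulfilling a request. The function name, the marker words, and the reply text are all invented for illustration.

```python
POLITE_MARKERS = ("please", "thank you", "thanks")

def respond(request: str) -> str:
    """Toy assistant policy: refuse to act until the request contains a polite marker."""
    if not any(marker in request.lower() for marker in POLITE_MARKERS):
        return "Hey, you have to use the magic word!"
    return f"OK, doing this now: {request}"

print(respond("turn off the lights"))          # nudges the child to ask politely
print(respond("please turn off the lights"))   # fulfils the request
```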

This is an opportunity to build intentionality into AI and to defend specific points of view through it.

So, although AI cannot be neutral, it can be a tool to shape desired behaviors in society.

Instead of looking for neutrality, we should aim at transparent intentionality.

5. AI needs diversity to succeed

AI algorithms will always carry the point of view of their designers. So, the best way to ensure the success of AI is to promote diversity in this field.

If we have AI engineers of different nationalities, genders, sexual orientations, colors, and cultures, the algorithms will reflect this diversity.

For this to happen, access to education on AI must expand. It’s essential to guarantee that different groups of people have access to this knowledge.

Kathleen Siminyu is a Data Scientist from Kenya who organizes the Nairobi Chapter of Women in Machine Learning and Data Science. This project plays a vital role in increasing diversity among AI engineers.

For me, this event was an excellent opportunity to learn more about the risks of AI to our society. It was great to see so many intelligent people engaged in discussing these problems.

I also want to give a thumbs up to ITS Rio and the Berkman Klein Center, from Harvard, for gathering such a diverse public and organizing such an awesome event.

Finally, I hope this article has also raised your awareness of these risks. I believe this is the first step toward creating a fairer and more inclusive Artificial Intelligence.

References

[1] http://webfoundation.org/docs/2017/07/Algorithms_Report_WF.pdf

[2] https://boingboing.net/2013/03/02/how-an-algorithm-came-up-with.html

[3] https://gizmodo.com/5256650/camera-misses-the-mark-on-racial-sensitivity

[4] https://www.themarshallproject.org/2016/02/03/policing-the-future?ref=hp-2-111#.UyhBLnmlj

[5] https://en.wikipedia.org/wiki/Trolley_problem

[6] https://www.washingtonpost.com/local/how-millions-of-kids-are-being-shaped-by-know-it-all-voice-assistants/2017/03/01/c0a644c4-ef1c-11e6-b4ff-ac2cf509efe5_story.html?utm_term=.caf4e3367230