From November 8th to 10th, I attended the Global Symposium: Artificial Intelligence & Inclusion. There were almost 200 participants, invited from all over the world.
I was particularly interested in two topics: (1) how to create AI solutions that respect societies’ cultural values and (2) how to use AI to increase inclusion in education.
Here are my five takeaways:
The rise of Artificial Intelligence brings a lot of benefits in areas such as education, health, media, government, and others. But it can also cause harm and discrimination.
This harmful behavior can arise in a few ways:¹
Solid Bomb Gold was a T-shirt company that used AI to generate phrases to print on its products. One of the phrases its algorithm created was “Keep Calm and Rape a Lot.” This, of course, caused terrible publicity, so Amazon revoked the company as a seller, and it ended up closing its doors.
The algorithm wasn’t built to generate this kind of offensive content. But whoever developed it designed it to simply pick any verb from a list², and that list contained the word “rape.” They had bad data and poorly defined rules.
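A minimal sketch of how such a generator might work, and of the safeguard it lacked. The template, word lists, and blocklist here are invented for illustration; they are not the company’s actual code:

```python
import random

TEMPLATE = "Keep Calm and {verb} {obj}"

# Hypothetical word lists, scraped without human review --
# the root of the problem in the original product.
VERBS = ["dance", "smile", "hug", "code"]
OBJECTS = ["a Lot", "On", "More"]

# A simple blocklist is the kind of rule the original pipeline was missing.
BLOCKLIST = {"rape", "kill"}

def generate_phrase(rng: random.Random) -> str:
    """Pick a verb/object pair, rejecting any blocklisted verb."""
    while True:
        verb = rng.choice(VERBS)
        obj = rng.choice(OBJECTS)
        if verb.lower() not in BLOCKLIST:
            return TEMPLATE.format(verb=verb, obj=obj)

phrase = generate_phrase(random.Random(42))
print(phrase)
```

Even this tiny filter would have stopped the offensive phrase from ever reaching a product page.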
Nikon launched a feature that warns the photographer whenever someone blinks in a picture. Some people reported that it was falsely detecting blinks in pictures of Asian people³.
Although Nikon is a Japanese company, it seems to have used biased training data in its algorithm. And if the algorithm was developed outside Asia but deployed worldwide, it also lacked regional context.
Do you remember the movie Minority Report? In this film, machines predicted where and when crime would happen so the police could anticipate it.
Scene from the movie Minority Report
This is happening right now. There is a company called PredPol, which uses AI to predict where crime is most likely to occur and help the police make better use of their patrol resources.
The data the company uses to train its model comes from crimes recorded by the police. But not every felony has the same chance of being recorded: crimes committed in areas that are heavily patrolled by police are more likely to end up in the record⁴.
This can create a feedback loop that reinforces patrolling in the same areas the police already cover. They use biased data and create feedback loops.
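The feedback loop can be shown with a toy simulation. The district names, rates, and counts below are invented; this is a caricature of the dynamic, not a model of PredPol itself:

```python
# Toy model of the predictive-policing feedback loop described above.
# True crime rates are equal, but the model never sees them -- it only
# sees the record, and recorded crime grows wherever patrols are sent.
true_crime_rate = {"district_a": 0.10, "district_b": 0.10}  # invisible to the model
recorded = {"district_a": 5, "district_b": 4}  # slight initial imbalance

for step in range(20):
    # Patrols go where the record shows more crime...
    patrolled = max(recorded, key=recorded.get)
    # ...so only crimes in the patrolled district get recorded.
    recorded[patrolled] += 1

print(recorded)
```

A one-incident head start is enough for district_a to receive every patrol, even though both districts have identical true crime rates.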
So, yeah! Even if it’s not designed to harm, AI can create harmful or discriminatory situations.
It’s not easy to define what is fair and what is not. It involves cultural aspects, value judgments, and different points of view.
But if there are cases we can agree upon, AI can learn from them.
AI works by learning how to use variables to optimize an objective function. We can improve AI fairness by translating fairness statements either into objective functions or into constraints on those variables.
If we intentionally teach fairness concepts to AI, it can apply them better than humans do, because machines are much more efficient than us at following formal rules.
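One way to make “translating fairness into an objective function” concrete is to add a fairness penalty to the quantity being optimized. The sketch below is schematic; the toy data, the demographic-parity metric, and the penalty weight are assumptions, not a production method:

```python
def accuracy_loss(predictions, labels):
    """Fraction of wrong predictions -- the original objective."""
    return sum(p != y for p, y in zip(predictions, labels)) / len(labels)

def parity_gap(predictions, groups):
    """Gap between the highest and lowest positive-prediction rates per group."""
    rate = {}
    for g in set(groups):
        members = [p for p, gg in zip(predictions, groups) if gg == g]
        rate[g] = sum(members) / len(members)
    vals = sorted(rate.values())
    return vals[-1] - vals[0]

def fair_objective(predictions, labels, groups, lam=0.5):
    """Combined objective: accuracy loss plus a weighted fairness penalty."""
    return accuracy_loss(predictions, labels) + lam * parity_gap(predictions, groups)

# Two candidate classifiers: one favors group "a", one treats groups equally.
labels = [1, 0, 1, 0]
groups = ["a", "a", "b", "b"]
biased = [1, 1, 0, 0]   # predicts positive only for group "a"
fairer = [1, 0, 1, 0]   # equal positive rates across groups
print(fair_objective(biased, labels, groups), fair_objective(fairer, labels, groups))
```

An optimizer minimizing `fair_objective` prefers the fairer classifier; the weight `lam` encodes how much fairness trades off against raw accuracy.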
For instance, the Brazilian NGO Desabafo Social created a campaign exposing how image search engines such as Shutterstock and Google discriminate by race. This is a powerful example of how bias and discrimination are invisibly replicated in the digital world.
If you search for “baby” on Shutterstock, you can expect to see almost exclusively white babies.
This search engine doesn’t reflect our population. But it can be taught to do that.
If we set “reflecting population diversity” as one of its goals, it can use a massive amount of data to guarantee its results are representative in terms of race, gender, and culture.
Machines can do that much more efficiently than humans.
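A re-ranker that enforces such a representativeness goal could look like this sketch. The catalog, demographic tags, and greedy strategy are all invented for illustration; real systems would infer tags and balance relevance far more carefully:

```python
from collections import Counter

# Hypothetical image catalog for the query "baby", already sorted by
# relevance. Each entry is (image_id, demographic_tag).
results = [("img1", "white"), ("img2", "white"), ("img3", "white"),
           ("img4", "black"), ("img5", "asian"), ("img6", "white"),
           ("img7", "latina"), ("img8", "black")]

def rerank(results, k):
    """Greedily pick the next image from the least-shown group so far."""
    picked = []
    counts = Counter()
    remaining = list(results)
    while remaining and len(picked) < k:
        # Sort is stable, so ties fall back to the original relevance order.
        remaining.sort(key=lambda item: counts[item[1]])
        img = remaining.pop(0)
        picked.append(img)
        counts[img[1]] += 1
    return picked

top4 = rerank(results, 4)
print([tag for _, tag in top4])
```

Instead of a first page dominated by one group, the top results now cycle through every group in the catalog while still respecting relevance within each group.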
So, AI is discriminatory because it learned that way. But it doesn’t need to be like that.
As we’ve seen before, AI tries to optimize variables to reach a specific goal.
Thinking about these goals raises ethical and moral questions that have been around for a while but have now become relevant again.
Other questions are new and arise from the possibilities that AI brings to the table.
The first class of Harvard’s Justice course, by philosopher Michael Sandel, presents a moral dilemma called the trolley problem, first formulated by Philippa Foot in 1967.
You are the driver of a runaway trolley which can only steer from one narrow track on to another; five men are tied to one track and one man to the other.⁵
If you do nothing, the trolley will kill five people. Yet, if you switch the track, it will kill only one.
The same kind of problem is now faced by engineers of autonomous vehicles, who need to decide whether to prioritize the lives of the people inside or outside the vehicle.
When we interact with AI, our life changes.
For instance, if we use Google Maps to find the shortest path to somewhere, we don’t have to ask people on the street anymore. This changes the relationship we have with a city when we’re traveling.
The relationship kids are developing with Siri and Alexa is a double-edged sword. They are becoming more curious since they can ask anything at any time. But parents are reporting that they are becoming less polite as well.⁶
(Illustration by Bee Johnson for The Washington Post)
You don’t have to use words like “please” and “thank you” with AI personal assistants, and this changes how children learn to interact with the world.
‘Algorithms aren’t neutral. If we change the algorithm, we change the reality’ — Desabafo Social
However, AI engineers can change this by programming assistants to reply “Hey, you have to use the magic word!” when rudely asked to do something.
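A sketch of such a politeness check. The magic-word list and the replies are invented for illustration; real assistants would handle language far less literally:

```python
MAGIC_WORDS = ("please", "thank you")

def respond(request: str) -> str:
    """Nudge the user toward politeness before fulfilling a request."""
    if not any(word in request.lower() for word in MAGIC_WORDS):
        return "Hey, you have to use the magic word!"
    return f"Sure! Working on: {request}"

print(respond("Play some music"))
print(respond("Please play some music"))
```

The design choice here is intentional: the assistant refuses to be useful until the social norm is respected, which is exactly the kind of value judgment the engineers get to embed.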
This is an opportunity to create intentionality within AI and to defend specific points of view through it.
So, although AI cannot be neutral, it can be a tool to shape desired behaviors in society.
Instead of looking for neutrality, we should aim at transparent intentionality.
AI algorithms will always carry the point of view of their designers. So, the best way to ensure the success of AI is to promote diversity in this field.
If we have AI engineers with different nationalities, genders, sexual orientations, skin colors, and cultures, the algorithms will replicate this diversity.
For this to happen, access to education on AI must increase. It’s essential to guarantee that different groups of people have access to this knowledge.
Kathleen Siminyu is a data scientist from Kenya who organizes the Nairobi chapter of Women in Machine Learning and Data Science. This project plays a vital role in increasing diversity among AI engineers.
For me, this event was an excellent opportunity to learn more about the risks of AI to our society. It was great to see so many intelligent people engaged in discussing these problems.
I also want to give a thumbs up to ITS Rio and Harvard’s Berkman Klein Center for gathering such a diverse audience and organizing such an awesome event.
Finally, I hope this article has also raised your awareness of these risks. I believe this is the first step toward a fairer and more inclusive Artificial Intelligence.