Fairness in AI: Navigating Complex Ethical AI Dilemmas with Beena Ammanath

Written by linked_do | Published 2023/12/20
Tech Story Tags: ai | ai-ethics | responsible-ai | stakeholder-framework | longview | fairness-in-ai | trustworthy-ai | ethical-ai

TL;DR: Trustworthy AI aims to provide a holistic framework for identifying the important questions to ask when developing or using AI. Trustworthy AI should be fair and impartial, robust and reliable, transparent, explainable, secure, safe, accountable, responsible, and privacy-preserving.

What does fairness in AI mean, and is it relevant in your use case?

This question is posed by Beena Ammanath. Ammanath is the Global Head of the Deloitte AI Institute, Founder of Humans For AI, and Board Member of AnitaB.org, as well as an author. I’ve had the pleasure of conversing with her a couple of times about her book Trustworthy AI.

The term “trustworthy AI” has been used in places ranging from the EU Commission to IBM and from the ACM to Deloitte. Ammanath defines Trustworthy AI as an amalgamation of AI Ethics, Responsible AI, and AI Safety. In her book, she lists and elaborates on the multiple dimensions she sees as collectively defining trustworthy AI.

Trustworthy AI should be fair and impartial, robust and reliable, transparent, explainable, secure, safe, accountable, responsible, and privacy-preserving. The book dedicates an entire chapter to each of these dimensions.

We couldn’t possibly do justice to all those dimensions in one conversation, or even in two. However, in our first conversation, in 2022, Ammanath emphasized the importance of asking the right questions when developing or applying AI. Trustworthy AI aims to provide a holistic framework for identifying the important questions to ask when developing or using AI.

https://www.youtube.com/watch?v=UVVONduoKTE

What are the important questions to ask when developing or using AI?

Fairness and bias are a good example. Even though these are probably the first things that come to mind when people talk about AI Ethics, they are not necessarily the most relevant ones, as Ammanath points out.

First off, it’s practically impossible to build a completely unbiased AI system. Since an AI system’s behavior is a function of the data used to train its models, it will inevitably reflect whatever bias is present in the training data.

Ammanath said that in most cases, when the data used does not pertain to people, fairness is probably not relevant. She illustrated the point with a couple of examples in our second conversation, in 2023.

When using AI to predict an engine failure, fairness is not relevant. But when using an AI tool to recruit potential candidates for your organization, fairness is crucial.

Another more contentious example Ammanath shared was AI-powered facial recognition. A biased system used for suspect identification may result in innocent people being tagged as potential criminals. Therefore, this is clearly an application in which fairness cannot be compromised.

But, as Ammanath notes, that exact same tool in almost the exact same physical context can be, and actually is, used to identify kidnapping and human trafficking victims. What is the acceptable level of fairness in that scenario?

Is it okay to have false positives when using an AI tool if it means helping rescue 50% or 60% more people than would be possible without it? What about 30%? What are the “right” levels of false positives?

In the suspect identification case, a false positive may result in detention, or even charges and arrest, if there are no guardrails in place. In the victim identification case, a false positive may result in wasted investigation time and resources, as well as upset families.

In both cases, there are dilemmas involved that transcend technology and touch on ethics and law. But even so, in both cases, “fairness” seems to correspond to “false positives” in a rather straightforward way. In other applications, the correspondence may be harder to make out.
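To make that correspondence concrete, here is a minimal Python sketch (not from the book) of one common way to quantify it: comparing false positive rates across groups. The function, group names, and data are all hypothetical illustrations, not any specific deployed system.

```python
# A minimal sketch of the fairness/false-positive link discussed above:
# comparing false positive rates across groups. All data here is made up
# for illustration; in practice these would be a model's predictions on
# a held-out evaluation set.

from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: iterable of (group, actual, predicted) with boolean labels."""
    fp = defaultdict(int)         # false positives per group
    negatives = defaultdict(int)  # actual negatives per group
    for group, actual, predicted in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives if negatives[g]}

# Hypothetical predictions from a suspect-identification model.
sample = [
    ("group_a", False, True), ("group_a", False, False),
    ("group_a", False, False), ("group_b", False, True),
    ("group_b", False, True), ("group_b", False, False),
]

print(false_positive_rate_by_group(sample))
# e.g. roughly {'group_a': 0.33, 'group_b': 0.67} -- a gap like this is
# what stakeholders would need to weigh against the system's benefits.
```

A gap in false positive rates between groups is only one possible fairness metric, but it captures the dilemma above in numbers that stakeholders can actually discuss.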

Is there a way out of the conundrum? Ammanath’s answer is that stakeholders have to come together and make an informed decision. This, we may add, also touches upon transparency.

Transparency is needed to make informed decisions, both in terms of who should be part of the decision and in terms of sharing the relevant information with them. Transparency is also needed to share the stakeholders’ decisions where appropriate.

Ammanath also shared some measures to improve fairness in AI systems. First, diversity: diverse teams are more likely to offer diverse perspectives, leading to fairer outcomes. Second, data quality: evaluating training data to make sure it is not biased, and possibly using synthetic data to compensate for missing sample data (see the sketch below).
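Here is a hedged sketch of what that data-quality step could look like in practice: auditing how well each group is represented in a training set before deciding whether synthetic data is needed. The field name, threshold, and dataset are assumptions made up for illustration.

```python
# A sketch of auditing group representation in training data, assuming a
# tabular dataset with a demographic "group" field. The 10% threshold is
# arbitrary; a real audit would set it per use case.

from collections import Counter

def representation_report(rows, group_field, min_share=0.1):
    """Return each group's share of the data and flag underrepresented ones."""
    counts = Counter(row[group_field] for row in rows)
    total = sum(counts.values())
    report = {}
    for group, count in counts.items():
        share = count / total
        report[group] = {"share": round(share, 3),
                         "underrepresented": share < min_share}
    return report

# Hypothetical recruiting dataset rows.
training_rows = [{"group": "A"} for _ in range(10)] + [{"group": "B"}]

print(representation_report(training_rows, "group"))
# {'A': {'share': 0.909, 'underrepresented': False},
#  'B': {'share': 0.091, 'underrepresented': True}}
# Flagged groups are candidates for collecting more real data or, as
# Ammanath suggests, generating synthetic samples to fill the gap.
```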

The key here is to reduce unintended consequences. We are probably never going to be able to completely eradicate all unintended consequences, but we should try to minimize them by being proactive and thoughtful.


This is the second in a series of short posts about important questions posed by people who think deeply and far about technology and its impact, focusing on AI in particular. The idea is to introduce a question and tease some answers as provided and/or inspired by the person who posed the question. Feedback welcome.

Join the Orchestrate All the Things Newsletter

https://linkeddataorchestration.com/orchestrate-all-the-things/newsletter/

Stories about how Technology, Data, AI, and Media flow into each other shaping our lives. Analysis, Essays, Interviews, and News. Mid-to-long form, 1-3 times per month.


Also published here.


Written by linked_do | Got Tech, Data, AI and Media, and he's not afraid to use them.
Published by HackerNoon on 2023/12/20